SimplyPrint: 3D Print Farm Management
3D print farm management for AI. Monitor, queue, and control prints on your SimplyPrint account.
Server Details
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: SimplyPrint/simplyprint-claude-plugin
- GitHub Stars: 1
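The server is exposed over Streamable HTTP, so any MCP client that supports that transport can connect. Below is a minimal connection sketch using the TypeScript MCP SDK; the endpoint URL is a placeholder (use the URL from the listing above), and the client name is arbitrary.

```typescript
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StreamableHTTPClientTransport } from "@modelcontextprotocol/sdk/client/streamableHttp.js";

// Placeholder endpoint: substitute the URL shown in the server details above.
const url = new URL(process.env.SIMPLYPRINT_MCP_URL ?? "https://example.invalid/mcp");

const client = new Client({ name: "simplyprint-example", version: "0.1.0" });
await client.connect(new StreamableHTTPClientTransport(url));

// Enumerate the ~70 tools the server exposes (add_to_queue, cancel_print, ...).
const { tools } = await client.listTools();
console.log(tools.map((t) => t.name).join("\n"));

await client.close();
```

The tool sketches below reuse a `client` constructed this way; each wraps a single `callTool` invocation.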
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
- Full call logging: Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
- Tool access control: Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
- Managed credentials: Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
- Usage analytics: See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Tool Definition Quality
Average 3.4/5 across 69 of 69 tools scored. Lowest: 2.7/5.
Many tools target distinct resources (queue, printer, filament, file), but there is overlap between similar operations like 'get_next_queue_item' and 'get_next_queue_items_for_printers', or 'inspect_printer_queue' and 'list_queue'. Descriptions help resolve ambiguity, but the large number of tools increases the chance of misselection.
Most tools follow a consistent verb_noun pattern (e.g., 'cancel_print', 'list_printers', 'update_file'). Minor deviations exist, such as 'add_to_queue' instead of 'add_queue_item', and 'get_filament' vs 'list_filaments' where the verb prefix differs. Overall predictable.
With 69 tools, the server is very large for an MCP server. While justified by the broad domain (queue management, printers, filament, files, maintenance, etc.), the sheer number can overwhelm agents and hinder efficient tool selection. A more focused subset would improve usability.
The tool surface covers most core operations: queue CRUD, printer control (pause, resume, cancel, home, send G-code), filament management, file operations, and maintenance dashboard. Minor gaps exist, such as lacking printer calibration or detailed slicing options, but the core workflows are well-supported.
Available Tools
70 tools

add_queue_comment (C)
Add a comment (general or feedback) to a queue item or user file. File attachments are not supported via MCP.
| Name | Required | Description | Default |
|---|---|---|---|
| type | No | | |
| comment | No | | |
| file_id | No | | |
| item_id | No | | |
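A hedged call sketch follows. None of the four parameters carry schema descriptions, so the values are hypothetical, and the assumption that exactly one of `item_id` or `file_id` is passed (targeting a queue item or a user file, respectively) is inferred from the description, not documented.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Hypothetical sketch: comment on a queue item. Parameter semantics are
// undocumented; the field meanings below are inferred from the description.
async function commentOnQueueItem(client: Client) {
  return client.callTool({
    name: "add_queue_comment",
    arguments: {
      type: "general", // assumed: "general" or "feedback", per the description
      comment: "Supports look too sparse; please re-slice before approval.",
      item_id: 4711,   // assumed to target a queue item; file_id would target a user file
    },
  });
}
```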
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a non-read-only, non-destructive, non-idempotent operation (write operation with potential side effects). The description adds useful context about the file attachment limitation, which isn't captured in annotations. However, it doesn't disclose other behavioral traits like authentication requirements, rate limits, or what happens when adding comments to different resource types.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that each add value: the first states the core functionality, the second provides an important limitation. There's no unnecessary repetition or fluff, though it could be slightly more structured by explicitly mentioning parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 4 undocumented parameters, no output schema, and annotations that only cover basic operation type, the description is insufficient. It doesn't explain what the tool returns, how parameters interact (e.g., whether file_id and item_id are mutually exclusive), or provide enough context for an agent to use it effectively without trial and error.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 4 parameters, the description provides no information about what 'type', 'comment', 'file_id', or 'item_id' mean or how they should be used. The mention of 'general or feedback' comments and 'queue item or user file' hints at parameter usage but doesn't clarify which parameters correspond to which resources or what values are expected.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Add a comment') and the target resources ('to a queue item or user file'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'update_queue_comment' or 'delete_queue_comment' beyond mentioning file attachment limitations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'update_queue_comment' or 'delete_queue_comment'. It mentions a limitation ('File attachments are not supported via MCP') but doesn't explain when this tool is appropriate versus other comment-related tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
add_to_queue (A)
Queue a file for printing. Pick exactly one file source: fileId (hex hash from the files.simplyprint.io Upload endpoint) or filesystem (UserFile.uid of an existing library file). Supports PRINT_QUEUE custom fields.
| Name | Required | Description | Default |
|---|---|---|---|
| tags | No | | |
| group | No | Queue group id to add the item to. Use list_queue_groups to discover available groups. | |
| amount | No | Number of copies to queue. Default 1. | |
| fileId | No | Hex bucket hash returned by the files.simplyprint.io Upload endpoint. Use this when the file was uploaded via the API (recommended for integrations). | |
| position | No | Where to insert: "top", "bottom", or a 1-based numeric position. | |
| filesystem | No | UserFile.uid of an existing library file (use list_files to discover). Either numeric id or UID string is accepted. | |
| for_groups | No | | |
| for_models | No | | |
| for_printers | No | | |
| custom_fields | No | PRINT_QUEUE custom fields for the new queue item. Each entry: {customFieldId: string uuid, value: one-of {string, number, boolean, date, options[]}}. | |
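A sketch of the common case, queuing an existing library file. Exactly one of `fileId` or `filesystem` is passed, per the description; the uid and other values are hypothetical.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Queue two copies of an existing library file at the top of the queue.
async function queueLibraryFile(client: Client) {
  return client.callTool({
    name: "add_to_queue",
    arguments: {
      filesystem: "uf_abc123", // hypothetical UserFile.uid from list_files
      amount: 2,               // number of copies (default 1)
      position: "top",         // "top", "bottom", or a 1-based numeric position
    },
  });
}
```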
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive, non-idempotent, closed-world operation, and the description aligns with them by implying a mutation (queueing) without contradicting them. The description adds useful context about the prerequisite (the file must exist and be discoverable via list_files), but does not disclose additional behavioral traits like rate limits, authentication needs, or what happens on failure. With annotations covering basic safety, a 3 is appropriate: the description adds some value but no rich behavioral detail.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core action and followed by a prerequisite. Every word earns its place, with no redundancy or unnecessary details, making it efficient and easy to parse for an AI agent.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (9 parameters, mutation operation) and lack of output schema, the description is reasonably complete. It covers the purpose, usage guidelines, and key parameter semantics for the required 'filesystem' field. However, it does not explain return values or error handling, which could be useful given the mutation nature. With annotations providing safety context, it's mostly adequate but has minor gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 33% (3 out of 9 parameters have descriptions), so the schema provides limited documentation. The description adds meaning by explaining the 'filesystem' parameter ('ID of an existing UserFile to queue (from list_files)'), which compensates partially. However, it does not cover other parameters like 'tags', 'group', or 'for_groups', leaving gaps. Baseline 3 reflects that the description adds some value but does not fully compensate for low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Queue an already-uploaded file for printing'), identifies the resource ('file'), and distinguishes it from siblings by specifying the prerequisite ('The file must already exist in the user's SimplyPrint files'). It explicitly references the sibling tool 'list_files' for discovery, making the purpose unambiguous and distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Queue an already-uploaded file for printing') and when not to use it ('The file must already exist... use list_files first to discover its id'). It names the alternative tool ('list_files') for the prerequisite step, offering clear context for usage versus other options like 'upload_file' or 'print_file'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
adjust_filament_weight (C)
Adjust the remaining weight of a filament spool (e.g. after manual weighing).
| Name | Required | Description | Default |
|---|---|---|---|
| filament_id | Yes | The filament ID | |
| percent_left | No | | |
| weighed_gross | No | | |
| grams_remaining | No | | |
| empty_spool_weight | No | | |
| subtract_empty_spool | No | | |
| save_empty_spool_weight | No | | |
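Only `filament_id` is documented, so any usage sketch has to infer the rest from parameter names. The fields and units below are assumptions, not documented behavior.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Record a manual weighing of spool 42. Field semantics are assumptions
// based on parameter names; the schema documents only filament_id.
async function recordSpoolWeight(client: Client) {
  return client.callTool({
    name: "adjust_filament_weight",
    arguments: {
      filament_id: 42,            // the filament ID (documented)
      weighed_gross: 812,         // assumed: gross scale reading in grams
      subtract_empty_spool: true, // assumed: deduct the empty-spool weight
    },
  });
}
```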
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive, non-idempotent, closed-world operation, which the description aligns with by implying a state change ('adjust') without contradicting these hints. The description adds context about manual weighing, but doesn't disclose behavioral traits like permission requirements, rate limits, or how conflicts with other tools (e.g., 'assign_filament') are handled, relying on annotations for basic safety.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary details. Every word contributes to understanding the tool's function, though it could be slightly more structured by explicitly mentioning key parameters or usage scenarios.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (7 parameters, low schema coverage, no output schema) and annotations providing only basic hints, the description is insufficient. It lacks details on parameter usage, expected outcomes, error conditions, or how it integrates with sibling tools like 'get_filament', making it incomplete for effective agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With low schema description coverage (14%), the description does not compensate by explaining parameter meanings or interactions. It mentions 'manual weighing' but doesn't clarify how parameters like 'percent_left', 'weighed_gross', or 'subtract_empty_spool' relate to this process, leaving most parameters semantically undefined beyond their schema constraints.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('adjust') and resource ('remaining weight of a filament spool'), with a specific example ('after manual weighing') that clarifies the context. It distinguishes from siblings like 'assign_filament' or 'mark_filament_dried' by focusing on weight adjustment rather than assignment or status changes. However, it doesn't explicitly differentiate from potential weight-related tools that might exist elsewhere.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance, only implying usage 'after manual weighing' without specifying when to use this tool versus alternatives like 'get_filament' for checking weight or other update tools. No explicit when-not-to-use instructions, prerequisites, or comparisons to sibling tools are included, leaving the agent with little contextual direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
approve_queue_item (B)
Approve one or more pending/revision/denied queue items, with optional comment.
| Name | Required | Description | Default |
|---|---|---|---|
| job | No | | |
| jobs | No | | |
| comment | No | | |
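A hedged sketch of approving a single item with a comment. None of the three parameters are documented; treating `job` as a single item id (with `jobs` presumably for batches) is an assumption.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Approve one pending queue item. The job id is hypothetical.
async function approveItem(client: Client) {
  return client.callTool({
    name: "approve_queue_item",
    arguments: {
      job: 9001, // assumed: single queue item id ('jobs' presumably for batches)
      comment: "Geometry verified; approved for printing.",
    },
  });
}
```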
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive operation. The description adds value by specifying it approves items in certain states ('pending/revision/denied') and allows optional comments, but doesn't disclose behavioral details like permission requirements, side effects, or rate limits. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and includes key details (item states, optional comment). Every word adds value with no redundancy or wasted space.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool (non-readOnly) with no output schema and low parameter coverage, the description is insufficient. It lacks details on return values, error conditions, or full parameter explanations, making it incomplete for safe and effective use by an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 3 parameters, the description only mentions 'optional comment', which partially covers one parameter ('comment'). It doesn't explain 'job' (integer) or 'jobs' (string), leaving most parameters undocumented and failing to compensate for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Approve') and target ('pending/revision/denied queue items'), with optional comment functionality. It specifies the verb and resource but doesn't explicitly differentiate from sibling tools like 'deny_queue_item' or 'send_back_for_revision', which handle similar queue items differently.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'pending/revision/denied queue items', suggesting it's for items in specific states. However, it lacks explicit guidance on when to use this vs. alternatives like 'deny_queue_item' or 'send_back_for_revision', and doesn't mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
archive_print_job (B)
Archive a print job (hide from default history listings).
| Name | Required | Description | Default |
|---|---|---|---|
| jobs | Yes | | |
| reason | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive, non-idempotent operation. The description adds that archiving 'hides from default history listings', which clarifies the behavioral effect beyond annotations. However, it doesn't address permissions, rate limits, or reversibility (though 'unarchive_print_job' exists as a sibling).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and effect. There is no wasted verbiage, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool (non-read-only) with no output schema and 0% schema description coverage, the description is insufficient. It lacks parameter explanations, error handling details, and output expectations, leaving significant gaps for agent usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions no parameters at all, failing to explain that 'jobs' is an array of job IDs or that 'reason' is optional for archiving rationale. This leaves parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('archive') and resource ('print job'), and specifies the effect ('hide from default history listings'). It doesn't explicitly differentiate from sibling 'unarchive_print_job', but the action is distinct enough to imply differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'delete_file' or 'cancel_print', nor does it mention prerequisites or exclusions. It only states what the tool does, not when to use it.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
assign_filament (B)
Assign a filament spool to a printer (or a specific extruder slot).
| Name | Required | Description | Default |
|---|---|---|---|
| source | No | | |
| extruder | No | | |
| printer_id | Yes | The printer ID | |
| filament_id | Yes | The filament IDs (comma-separated for multiple) | |
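A sketch assigning one spool to a printer. `printer_id` and `filament_id` follow their documented formats; the `extruder` value is an assumption (the schema leaves it and `source` undocumented), and the ids are hypothetical.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Assign spool 42 to printer 17.
async function assignSpool(client: Client) {
  return client.callTool({
    name: "assign_filament",
    arguments: {
      printer_id: 17,    // the printer ID (documented)
      filament_id: "42", // comma-separated for multiple spools (documented)
      extruder: 0,       // assumed: zero-based extruder slot
    },
  });
}
```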
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive, non-idempotent mutation. The description adds context about assigning to specific extruder slots, which isn't covered by annotations. However, it doesn't mention permission requirements, side effects, or what happens if the assignment fails.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that immediately states the tool's purpose. There's no wasted verbiage, and it's appropriately front-loaded with the core functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 4 parameters and no output schema, the description is minimal. It covers the basic action but lacks details on error conditions, return values, or integration with sibling tools like 'unassign_filament'. The annotations help but don't fully compensate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 50% schema description coverage (only printer_id and filament_id have descriptions), the description mentions 'extruder slot' which corresponds to the 'extruder' parameter, adding some value. However, it doesn't explain the 'source' parameter or provide format details for filament_id's comma-separated handling.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('assign') and resource ('filament spool to a printer'), specifying the optional extruder slot. It is implicitly distinguished from the sibling 'unassign_filament' as the opposite operation, though the description never mentions that tool.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. While 'unassign_filament' exists as an inverse operation, the description doesn't mention it or any prerequisites like printer/filament availability.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cancel_print (B, Destructive)
Cancel the current print on a printer
| Name | Required | Description | Default |
|---|---|---|---|
| reason | No | Cancel reason ID. May be required depending on organization settings. | |
| comment | No | Optional comment explaining why the print was cancelled. May be required depending on organization settings. | |
| printer_id | Yes | The printer IDs (comma-separated for multiple) | |
| custom_position | No | Custom sort position when return_position is 'custom'. | |
| return_position | No | Position to insert queue item at when returning to queue. | |
| return_to_queue | No | Whether to return cancelled queued print to the queue. Uses organization default if not specified. | |
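All six parameters are documented, so a sketch is mostly grounded; the `return_position` value is still an assumption, since the schema does not enumerate valid positions. This tool is destructive, so confirm with the user before calling.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Cancel a print on printer 17 and return the job to the top of the queue.
async function cancelAndRequeue(client: Client) {
  return client.callTool({
    name: "cancel_print",
    arguments: {
      printer_id: "17",       // comma-separated for multiple printers
      comment: "First-layer adhesion failure.",
      return_to_queue: true,  // omit to use the organization default
      return_position: "top", // assumed value; "custom" pairs with custom_position
    },
  });
}
```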
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, which aligns with the description's 'Cancel' action implying irreversible change. The description adds minimal behavioral context beyond annotations—it doesn't detail effects like job termination or queue handling, but doesn't contradict annotations either.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words, making it highly concise and front-loaded. It efficiently communicates the core action without unnecessary elaboration, earning full marks for brevity and structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and rich annotations, the description is minimal but adequate. It covers the basic purpose but lacks details on outcomes, error conditions, or integration with sibling tools, leaving gaps in full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all 6 parameters. The description adds no additional parameter semantics, such as explaining the impact of 'return_to_queue' or 'reason' codes, so it meets the baseline for high schema coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Cancel') and target ('current print on a printer'), which is specific and unambiguous. However, it doesn't differentiate from sibling tools like 'pause_print' or 'remove_from_queue', which also affect print operations, so it misses full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'pause_print' or 'remove_from_queue'. It lacks context about prerequisites, timing, or organizational settings that might affect its use, offering only basic functional intent.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_folder (C)
Create a new folder (or edit an existing one with optional org-level permissions).
| Name | Required | Description | Default |
|---|---|---|---|
| org | No | | |
| name | Yes | | |
| item_id | No | | |
| org_perms | No | | |
| parent_folder | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a non-readOnly, non-destructive operation. The description adds that it can 'edit an existing one with optional org-level permissions,' which provides useful context about dual functionality and permission capabilities not covered by annotations. However, it doesn't disclose important behavioral details like whether folder creation requires specific permissions, what happens on conflicts, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core functionality. It's appropriately front-loaded with the primary action. However, the dual functionality ('create or edit') creates some ambiguity that slightly reduces clarity, preventing a perfect score.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters, 0% schema coverage, no output schema, and annotations that only cover basic safety hints, the description is inadequate. It doesn't explain what parameters do, what the tool returns, error conditions, or important behavioral constraints. The agent would struggle to use this tool correctly without additional documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 5 parameters, the description carries full burden for parameter documentation but provides almost none. It mentions 'org-level permissions' which hints at the 'org_perms' parameter, and 'edit an existing one' suggests 'item_id' might be for existing folders, but doesn't explain the purpose of 'name', 'parent_folder', or 'org' parameters. This leaves most parameters semantically undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Create a new folder' which provides a clear verb+resource, but the addition 'or edit an existing one with optional org-level permissions' creates ambiguity about whether this is primarily a creation or editing tool. It distinguishes from obvious siblings like 'delete_folder' but doesn't clearly differentiate from 'move_folder' or 'update_file' in terms of folder modification capabilities.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, when to choose this over other folder-related tools like 'move_folder' or 'delete_folder', or any context about the optional editing functionality. The agent receives no usage direction beyond the basic operation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
create_print_job (A)
Start a print job on one or more printers. File source is exactly one of: file_id (API file hash from upload), filesystem (user-file uid), queue_file (existing queue item id), reprint (previous print-job id), or next_queue_item=true (auto-pick the next matching queue item per printer, deduplicated across printers). Supports PRINT_JOB custom fields (shared and per-printer). Auto-starts when the account's autostartPrints setting is on (default).
| Name | Required | Description | Default |
|---|---|---|---|
| file_id | No | Hex bucket hash returned by the files.simplyprint.io Upload endpoint. Choose this when starting from a file uploaded via the API. | |
| mms_map | No | | |
| reprint | No | Previous print_job id to reprint with the same file and settings. | |
| filesystem | No | UserFile.uid of an existing library file. Choose this when starting from an already-imported file. | |
| printer_id | Yes | Comma-separated printer id(s) to start the job on. Must be operational. When next_queue_item=true, each printer gets a different queue item (same item never duplicated within one call). | |
| queue_file | No | Existing print queue item id. Choose this to start the job from a queued item (most common flow). | |
| custom_fields | No | PRINT_JOB custom fields shared across all started jobs. Each entry is an object with customFieldId (string uuid) and value (one-of string/number/boolean/date/options). | |
| start_options | No | | |
| next_queue_item | No | If true, auto-pick the next matching queue item for each printer in pid. Uses the same compatibility matcher as get_next_queue_items_for_printers. Mutually exclusive with file_id/filesystem/queue_file/reprint. | |
| individual_custom_fields | No | Per-printer or per-queue-item PRINT_JOB custom fields. Each entry: {id: <string>, value: [customFieldSubmissions]}. | |
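A sketch of the auto-pick flow described above: start the next matching queue item on two printers. The printer ids are hypothetical; per the schema, `next_queue_item` is mutually exclusive with the other file sources.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// Start the next matching queue item on printers 17 and 18.
async function startNextQueued(client: Client) {
  return client.callTool({
    name: "create_print_job",
    arguments: {
      printer_id: "17,18",   // comma-separated; printers must be operational
      next_queue_item: true, // each printer gets a different matching item
    },
  });
}
```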
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=false, which are not contradicted. The description adds context about file source exclusivity and printer requirement ('Must be operational'), going beyond annotations. No mention of rate limits or authentication, but acceptable.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with its purpose, with no unnecessary words. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With 9 parameters and no output schema, the description omits key details about 'start_options' and 'mms_map' (which lack schema descriptions). It also does not mention what the tool returns (e.g., print job ID). This leaves gaps for a complex tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema coverage is 78% (high). The description adds meaning by clarifying that file sources are mutually exclusive and custom fields can be shared or per-printer. This enhances understanding beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'Start a print job on one or more printers' with specific verbs and resource. It distinguishes from sibling tools like 'add_to_queue' which adds to queue but doesn't start, and 'cancel_print' for cancellation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It explicitly lists the mutually exclusive file sources (file_id, filesystem, queue_file, reprint, and next_queue_item) and explains each. However, it does not explicitly state when not to use this tool versus alternatives like 'add_to_queue' or 'resubmit_queue_item'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_file (B, Destructive)
Delete one or more files.
| Name | Required | Description | Default |
|---|---|---|---|
| file | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a destructive, non-idempotent write operation (destructiveHint: true, readOnlyHint: false, idempotentHint: false). The description adds minimal behavioral context beyond this—it specifies 'one or more files' which hints at batch capability, but doesn't clarify permissions, error handling, or what happens to deleted files. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It's front-loaded with the core action and resource, making it easy to scan and understand immediately. Every word earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and minimal annotations beyond basic hints, the description is inadequate. It lacks details on permissions, error cases, batch behavior, or what 'delete' entails (e.g., permanent vs. recoverable). Given the complexity and risk of file deletion, more context is needed for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter with 0% description coverage, and the description provides no information about the 'file' parameter. It doesn't explain what format 'file' expects (e.g., file path, ID, pattern), whether it supports wildcards for multiple files, or any constraints. With low schema coverage, the description fails to compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Delete one or more files' clearly states the action (delete) and resource (files), making the purpose immediately understandable. It distinguishes from siblings like 'delete_folder' by specifying files, but doesn't explicitly contrast with similar tools like 'move_file' or 'update_file' beyond the verb difference.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'delete_folder', 'move_file', or 'archive_print_job'. It doesn't mention prerequisites, consequences, or scenarios where this tool is appropriate versus other deletion or file management operations available in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_folder (B, Destructive)
Delete one or more folders.
| Name | Required | Description | Default |
|---|---|---|---|
| folder | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true, readOnlyHint=false, etc., so the agent knows this is a non-idempotent write operation. The description adds minimal context by implying it can handle multiple folders, but doesn't detail permissions, reversibility, or error handling. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words, making it easy to parse. It's front-loaded with the core action and resource, though brevity comes at the cost of detail.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and 0% schema coverage, the description is inadequate. It lacks details on behavior, parameters, and outcomes, failing to compensate for the sparse structured data, which could lead to misuse.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description provides no information about the 'folder' parameter beyond what's implied in the tool name. It doesn't explain the parameter's format, how to specify multiple folders, or any constraints, leaving significant gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('one or more folders'), making the purpose immediately understandable. It distinguishes from siblings like 'delete_file' by specifying folders, though it doesn't explicitly contrast with 'move_folder' or 'create_folder'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'move_folder' or 'delete_file', nor are prerequisites or exclusions mentioned. The description only states what it does, not when it's appropriate.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_queue_comment (A, Destructive)
Delete an approval comment you authored (or any comment if you have the permission).
| Name | Required | Description | Default |
|---|---|---|---|
| item_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate destructiveHint=true, but the description adds valuable context about permission requirements ('you authored or any comment if you have the permission'), which is not covered by annotations. It does not contradict annotations, as 'delete' aligns with destructiveHint.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and constraints without unnecessary words, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and low parameter coverage, the description covers purpose and permissions adequately but lacks details on error cases, return values, or side effects, leaving gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description does not explain the 'item_id' parameter's meaning or format. It adds no semantic information beyond what the schema provides, so it meets the baseline for low coverage without compensation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and the resource ('an approval comment'), specifying authorship/permission constraints. It distinguishes from sibling tools like 'update_queue_comment' by focusing on deletion rather than modification.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides clear context on when to use it (deleting comments you authored or have permission for), but does not explicitly mention when not to use it or name alternatives like 'update_queue_comment' for non-deletion scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
delete_queue_group (A, Destructive)
Delete a queue group. Optionally move its items to another group via move_to.
| Name | Required | Description | Default |
|---|---|---|---|
| item_id | Yes | | |
| move_to | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a destructive, non-idempotent write operation (destructiveHint=true, readOnlyHint=false). The description adds valuable context by mentioning the optional 'move_to' parameter for item relocation, which clarifies behavioral aspects beyond the annotations. It doesn't specify error conditions or permissions required, but with annotations covering the safety profile, this provides reasonable additional context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with just two sentences that each earn their place. The first sentence states the core action, and the second provides crucial additional context about the optional parameter. There's zero wasted language, and information is appropriately front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and 0% parameter documentation in the schema, the description provides basic but incomplete coverage. It clarifies the main action and one parameter's purpose but leaves the required 'item_id' unexplained and doesn't describe what happens on success/failure. Given the complexity of a destructive operation, more completeness would be expected.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for both parameters, the description must compensate but only partially does so. It mentions 'move_to' as an option for moving items to another group, which adds meaning beyond the schema. However, it doesn't explain 'item_id' at all, leaving a key required parameter undocumented. The description adds some value but doesn't fully address the schema coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete a queue group') and resource ('queue group'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from other deletion tools like 'delete_file' or 'delete_folder' in the sibling list, which would require mentioning what makes queue group deletion distinct.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning the optional 'move_to' parameter for handling items, suggesting this tool should be used when deleting queue groups while potentially preserving their contents. However, it doesn't provide explicit guidance on when to use this versus alternatives like 'empty_queue' or 'delete_queue_comment', nor does it specify prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deny_queue_item (B, Destructive)
Deny a pending queue item, either removing it or requesting revisions. Include a comment explaining the decision.
| Name | Required | Description | Default |
|---|---|---|---|
| job | No | | |
| jobs | No | | |
| remove | No | | |
| comment | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, readOnlyHint=false, and non-idempotent, which the description aligns with by implying a mutation ('deny'). The description adds value by specifying that denial can involve 'removing it or requesting revisions' and requires a comment, which are behavioral details not covered by annotations, enhancing transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is efficient, front-loading the core action and stating essential details without waste. It is appropriately sized and structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, 0% schema coverage, no output schema, and destructive annotations), the description is incomplete. It covers the basic action and comment requirement but lacks details on parameter usage, error conditions, or return values, making it only minimally adequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'comment' but does not explain the semantics of 'job', 'jobs', or 'remove', leaving key parameters unclear. This insufficient compensation results in a low score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('deny a pending queue item') and the outcome ('either removing it or requesting revisions'), which is specific and actionable. However, it does not explicitly distinguish this tool from sibling tools like 'remove_from_queue' or 'send_back_for_revision', which might handle similar denial actions, so it misses full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'approve_queue_item' or other denial-related siblings like 'remove_from_queue' or 'send_back_for_revision'. It mentions the action but lacks context on prerequisites, timing, or exclusions, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
empty_queue (A, Destructive)
DESTRUCTIVE: Delete all items from the queue (optionally filtered by group or done-only). Confirm with the user before calling.
| Name | Required | Description | Default |
|---|---|---|---|
| group | No | | |
| done_items | No | | |
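Given the destructive warning, a cautious sketch that clears only finished items. Interpreting `done_items` as a boolean filter is an assumption; the schema documents neither parameter.

```typescript
import type { Client } from "@modelcontextprotocol/sdk/client/index.js";

// DESTRUCTIVE: deletes queue items. Per the tool's own description,
// confirm with the user before calling.
async function clearDoneItems(client: Client) {
  return client.callTool({
    name: "empty_queue",
    arguments: {
      done_items: true, // assumed: restrict deletion to finished items only
    },
  });
}
```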
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare destructiveHint=true, but the description adds valuable context by emphasizing 'DESTRUCTIVE:' upfront and specifying the scope ('all items from the queue') and optional filters. It doesn't contradict annotations and provides additional behavioral insight beyond the structured hints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences, front-loaded with the critical 'DESTRUCTIVE' warning and purpose, followed by a concise usage guideline. Every word serves a clear purpose with no redundancy or fluff.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema, the description covers purpose, parameters, and critical usage warning. Annotations provide safety hints, but the description could mention what 'empty' entails (e.g., permanent deletion vs. archiving) for full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description compensates by explaining the optional parameters: 'filtered by group or done-only'. This clarifies that 'group' and 'done_items' are filters for the deletion, adding meaningful context beyond the bare schema types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the verb 'Delete' and the resource 'all items from the queue', with optional filtering by 'group or done-only'. It clearly distinguishes this destructive operation from sibling tools like 'remove_from_queue' or 'delete_queue_group' by specifying it targets all items (with filters).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance: 'Confirm with the user before calling', which is a critical when-to-use directive for a destructive operation. It also implies usage context by mentioning optional filtering parameters, though it doesn't name specific alternatives among siblings.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
estimate_file_costBRead-onlyIdempotentInspect
Estimate the print cost of a file, optionally for a specific printer.
| Name | Required | Description | Default |
|---|---|---|---|
| item_id | No | ||
| mms_map | No | ||
| newFile | No | ||
| analysis | No | ||
| printer_id | No |
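With none of the five parameters documented, any call beyond what the description states is a guess. A hedged sketch using only the inputs the description implies; both values and the role of 'item_id' are assumptions:

```python
# Hypothetical arguments for call_tool("estimate_file_cost", ...).
# printer_id is the one input the description names; item_id as the file
# reference is an assumption, and mms_map/newFile/analysis are left unset
# because their semantics are undocumented.
args = {
    "item_id": 1234,  # assumed: id of the file to cost out
    "printer_id": 7,  # optional per the description: estimate for this printer
}
```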
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, indicating a safe, non-destructive, repeatable operation with deterministic results. The description adds that it 'estimates' cost, which aligns with read-only behavior but does not disclose additional traits like rate limits, authentication needs, or what 'estimate' entails (e.g., accuracy, assumptions). No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes optional functionality. There is no wasted language, making it appropriately sized and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 0% schema coverage, no output schema, and annotations covering safety but not operational context, the description is incomplete. It lacks details on parameter meanings, return values, error conditions, or how the estimation works, leaving significant gaps for a tool with multiple inputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It only mentions 'printer_id' optionally, ignoring 'item_id', 'mms_map', 'newFile', and 'analysis'. This leaves most parameters unexplained, failing to add meaningful semantics beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Estimate the print cost of a file, optionally for a specific printer.' It specifies the verb ('estimate'), resource ('print cost of a file'), and includes an optional scope ('for a specific printer'). However, it does not explicitly differentiate from sibling tools like 'get_queue_item_cost', which might serve a similar but distinct purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance: it mentions the optional 'printer_id' parameter but does not explain when to use this tool versus alternatives (e.g., 'get_queue_item_cost' for queue-related costs) or any prerequisites. There is no explicit when/when-not context or named alternatives, leaving usage ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_account_statisticsBRead-onlyIdempotentInspect
Account-level print statistics (success rate, filament used, print time, cost) with optional date range and user/printer filters.
| Name | Required | Description | Default |
|---|---|---|---|
| users | No | ||
| general | No | ||
| end_date | No | ||
| printers | No | ||
| fake_data | No | ||
| start_date | No | ||
| printer_models | No |
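A sketch of a filtered call; the ISO 8601 date format and the list-valued printer filter are assumptions, since the schema documents none of the seven parameters:

```python
# Hypothetical arguments for call_tool("get_account_statistics", ...).
args = {
    "start_date": "2024-01-01",  # assumed ISO 8601 date strings
    "end_date": "2024-03-31",
    "printers": [3, 7],          # assumed: restrict statistics to these printer ids
}
```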
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds context about filtering capabilities (date range, user/printer filters) but doesn't disclose behavioral traits like rate limits, authentication needs, or response format, offering moderate value beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose (account-level statistics) and key filters, with no wasted words. It is appropriately sized for the tool's complexity, making every part of the sentence earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (7 parameters, no output schema), annotations cover safety and idempotency, but the description lacks details on parameter usage, response format, and behavioral context. It's minimally adequate but has clear gaps, such as not explaining what the statistics represent or how filters interact.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 7 parameters, the description only mentions date range and user/printer filters, covering a subset (e.g., start_date, end_date, users, printers). It omits details on parameters like general, fake_data, and printer_models, failing to compensate for the low schema coverage and leaving most parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves account-level print statistics (success rate, filament used, print time, cost) with optional filters, specifying both the resource (account statistics) and key metrics. It distinguishes from siblings by focusing on aggregated statistics rather than individual items like printers or jobs, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for retrieving aggregated statistics with optional filtering by date, users, or printers, but provides no explicit guidance on when to use this versus other tools (e.g., get_print_job for individual details) or any prerequisites. Context is clear but lacks sibling differentiation or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_farm_overviewARead-onlyIdempotentInspect
One-shot summary of farm-wide printer state. Use this (NOT list_printers) when the user asks "how many printers are printing/idle/awaiting bed clear/etc." or "what is the state of the farm". Returns a total and {count, printers:[{id,name}]} for each bucket: online, offline, not_connected, operational (idle), printing, paused, awaiting_bed_clear (a print finished but the bed has not been cleared yet — printer is online + operational + still has a job; this is NOT print_pending), in_maintenance, print_pending (a queued staggered/scheduled start), requires_attention (has unresolved error notifications), ai_running, ai_detected_low, ai_detected_high. Counts overlap intentionally: a printer can be in "online" + "printing" + "ai_running" at once.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
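Since the description spells out the response shape, a caller can navigate the buckets directly. A sketch of reading one bucket from a decoded response; the exact JSON key names are assumptions based on the description:

```python
# Hypothetical decoded response, following the documented shape: a total plus
# {count, printers: [{id, name}]} per bucket. Key names are assumed.
overview = {
    "total": 12,
    "printing": {"count": 2, "printers": [{"id": 3, "name": "K2 Left"},
                                          {"id": 7, "name": "K2 Right"}]},
    "awaiting_bed_clear": {"count": 1, "printers": [{"id": 5, "name": "Mini"}]},
}

# Buckets overlap intentionally, so counts need not sum to the total.
busy = overview["printing"]["count"]
blocked = [p["name"] for p in overview["awaiting_bed_clear"]["printers"]]
print(f"{busy} printing; beds to clear: {', '.join(blocked)}")
```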
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, idempotent, non-destructive behavior. The description adds value by explaining overlapping counts and clarifying states like 'awaiting_bed_clear', which goes beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single focused paragraph with front-loaded purpose. All sentences contribute meaning, though it could be slightly more structured with bullet points.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Despite no output schema, the description details the return structure (total and buckets with counts and printer arrays), providing sufficient completeness for an agent to understand the output.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
No parameters exist, so the description does not need to add parameter semantics. Baseline score of 4 is appropriate as it handles the absence well.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states 'One-shot summary of farm-wide printer state' and explicitly distinguishes from sibling 'list_printers' by specifying use cases like counting printers in various states.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
Provides explicit guidance on when to use this tool over 'list_printers' and lists the state buckets, but does not detail exclusions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_filamentBRead-onlyIdempotentInspect
Get details of one specific filament spool. Use this (NOT list_filaments+grep) when the user names a spool by id or short id. Accepts spool_id (the integer DB id) or spool_short_id (the 4-character code shown on QR labels and the spool view page, e.g. "T2SO", "M0WT").
| Name | Required | Description | Default |
|---|---|---|---|
| public | No | ||
| spool_id | No | ||
| company_id | No | ||
| locationscount | No | ||
| spool_short_id | No |
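The description grounds the two identifier parameters well; the other three are undocumented. A sketch of the two lookup styles it supports, with hypothetical values (the short id "T2SO" is taken from the description itself):

```python
# Arguments for call_tool("get_filament", ...): pass one of the two identifiers.
by_db_id = {"spool_id": 42}               # integer DB id (value hypothetical)
by_short_id = {"spool_short_id": "T2SO"}  # 4-character QR-label code
```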
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key traits (read-only, non-destructive, idempotent, closed-world), so the description adds minimal value. It doesn't disclose additional behaviors like error conditions, authentication needs, or rate limits, but doesn't contradict annotations either.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description runs to three tight sentences: purpose, sibling guidance, and identifier formats with examples. It is front-loaded and easy to parse, with every sentence earning its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters, no output schema, and partial parameter documentation, the description covers identification well (spool_id vs. spool_short_id, with formats and examples) but leaves gaps: it says nothing about return values or the remaining parameters ('public', 'company_id', 'locationscount').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 5 parameters, the description compensates for the two identifiers, explaining that 'spool_id' is the integer DB id and 'spool_short_id' is the 4-character QR-label code, with examples. It leaves 'public', 'company_id', and 'locationscount' unexplained, so those semantics remain unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Get details') and resource ('specific filament spool'), and explicitly differentiates from 'list_filaments' ('Use this (NOT list_filaments+grep)'). It does not, however, distinguish itself from 'get_filament_history', which covers historical rather than current spool data.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives explicit guidance: use this tool, not 'list_filaments' plus filtering, when the user names a spool by id or short id. It omits prerequisites and does not mention 'get_filament_history' for historical data, so coverage of alternatives is partial.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_filament_historyBRead-onlyIdempotentInspect
Retrieve the usage history of a filament spool.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| public | No | ||
| item_id | No | ||
| perPage | No | ||
| user_id | No |
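'page' and 'perPage' suggest standard pagination, though neither the schema nor the description confirms it. A hedged sketch:

```python
# Hypothetical arguments for call_tool("get_filament_history", ...).
# item_id as the spool reference and page/perPage as conventional pagination
# are assumptions; the schema documents none of the five parameters.
args = {
    "item_id": 42,  # assumed: the filament spool whose history is wanted
    "page": 1,
    "perPage": 25,
}
```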
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description doesn't need to repeat these. It adds minimal context by specifying 'usage history,' but doesn't detail aspects like pagination behavior, error handling, or response format, which could be useful given the lack of output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words, making it highly concise and front-loaded. It efficiently conveys the core purpose without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with no schema descriptions, no output schema, and annotations covering safety but not operational details, the description is insufficient. It doesn't address how to use parameters, what the history includes, or the return format, leaving significant gaps for an AI agent to invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 5 parameters, the description provides no information about parameters like 'page', 'public', 'item_id', 'perPage', or 'user_id'. It doesn't explain what these mean or how they affect the retrieval, failing to compensate for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Retrieve') and resource ('usage history of a filament spool'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_filament' or 'list_filaments', which might also retrieve filament-related data, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'get_filament' for current filament details or 'list_filaments' for a list of filaments. There's no mention of prerequisites, context, or exclusions, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_folderBRead-onlyIdempotentInspect
Get a folder's details including permissions.
| Name | Required | Description | Default |
|---|---|---|---|
| item_id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, covering safety and idempotency. The description adds value by specifying what details are retrieved ('including permissions'), which isn't in the annotations. However, it doesn't disclose other behavioral traits like error handling, rate limits, or authentication needs, leaving some gaps despite the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's function without unnecessary words. It's front-loaded with the core action ('Get a folder's details') and adds a useful detail ('including permissions') concisely. Every part earns its place, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (1 parameter, no output schema) and rich annotations, the description is somewhat complete but has gaps. It covers what the tool does but lacks usage guidelines, parameter explanations, and output details. With annotations handling safety, it's minimally adequate but not fully helpful for an agent to use correctly in all contexts.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 1 parameter (item_id) with 0% description coverage, so the schema provides minimal semantic information. The description doesn't add any parameter-specific details (e.g., what item_id represents, format, or examples). Since there's only one parameter, the baseline is higher, but the description fails to compensate for the low schema coverage, resulting in a mediocre score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb 'Get' and the resource 'folder's details including permissions', making the purpose specific and understandable. However, it doesn't explicitly differentiate from siblings like 'list_files' or a hypothetical 'list_folders', which would require mentioning that it retrieves a single folder by ID rather than listing multiple folders.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a folder ID), exclusions (e.g., not for listing folders), or direct siblings like 'create_folder' or 'move_folder' for related operations. This leaves the agent to infer usage from the name and schema alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_maintenance_dashboardBRead-onlyIdempotentInspect
Overview of maintenance jobs, problems, inventory, and printer maintenance status.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds minimal behavioral context beyond this, such as implying a dashboard-style summary, but doesn't detail response format, data freshness, or access requirements.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the key information ('overview of maintenance...'). It avoids redundancy and wastes no words, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a dashboard with multiple data types), lack of output schema, and rich annotations, the description is adequate but incomplete. It specifies what data is included but not the format, scope, or limitations, leaving gaps for the agent to navigate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description doesn't need to add parameter details, and it appropriately avoids mentioning any, earning a baseline score of 4 for not introducing confusion.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool provides an 'overview of maintenance jobs, problems, inventory, and printer maintenance status,' which is a specific verb+resource combination. However, it doesn't explicitly distinguish itself from sibling tools like 'get_printer' or 'list_printers,' which might provide related but different information.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, exclusions, or specific contexts for usage, leaving the agent to infer based on the purpose alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_next_queue_itemBRead-onlyIdempotentInspect
Find the best-matching queue item(s) for a set of printers using SimplyPrint's compatibility matcher.
| Name | Required | Description | Default |
|---|---|---|---|
| compact | No | Keep at default true. Returns minimal data tuned for AI context. Only set to false if you specifically need fields the compact view drops, and then pair with limit to keep the response small. | |
| filters | No | ||
| sorting | No | ||
| settings | No | ||
| deselects | No | ||
| printer_id | Yes | The printer IDs (comma-separated for multiple) | |
| queueGroupsOrder | No | ||
| skippedQueueItems | No | ||
| specificQueueGroups | No |
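Only 'compact' and 'printer_id' carry schema descriptions, and printer_id is confirmed to accept comma-separated ids. A minimal sketch that leaves the undocumented matcher knobs at their server defaults:

```python
# Arguments for call_tool("get_next_queue_item", ...). printer_id is
# comma-separated per its schema description; compact stays at its default
# (true). filters/sorting/settings/etc. are left unset because their
# semantics are undocumented.
args = {"printer_id": "3,7,12"}  # printer ids are hypothetical
```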
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key traits (read-only, non-destructive, idempotent, closed-world), so the description doesn't need to repeat these. It adds value by mentioning 'best-matching' and 'compatibility matcher', hinting at algorithmic behavior, but doesn't detail how matches are determined, error handling, or output format. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without fluff. Every word contributes directly to explaining the tool's function, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (9 parameters, low schema coverage, no output schema) and rich sibling set, the description is inadequate. It omits parameter explanations, output details, and usage context, leaving significant gaps for an agent to invoke it correctly despite good conciseness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is low: only 'compact' and 'printer_id' carry descriptions, leaving 7 of the 9 parameters (e.g., filters, sorting, settings) undocumented in both schema and description. The description fails to compensate for this, offering no semantic context for the remaining inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('find') and target ('best-matching queue item(s) for a set of printers'), specifying it uses SimplyPrint's compatibility matcher. It distinguishes itself from generic queue tools like 'get_queue_item' by focusing on matching, though it doesn't explicitly differentiate from similar tools like 'match_file_to_printers' or 'inspect_printer_queue'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance, implying usage when needing to match printers to queue items. However, it lacks explicit when-to-use instructions, prerequisites, or comparisons to alternatives like 'match_file_to_printers' (which might handle file matching) or 'get_queue_item' (which retrieves specific items). No exclusions or contextual boundaries are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_next_queue_items_for_printersARead-onlyIdempotentInspect
Read-only preview: for each given printer, return the next queue item that would be started. Uses the same dedup matcher as create_print_job with next_queue_item=true, so the same queue item is never returned twice across printers in one call. Includes match failures per printer (issues) so you can explain why a printer has nothing to print. Does NOT start any job.
| Name | Required | Description | Default |
|---|---|---|---|
| compact | No | Keep at default true. Returns minimal data tuned for AI context. Only set to false if you specifically need fields the compact view drops, and then pair with limit to keep the response small. | |
| printer_id | Yes | Comma-separated printer id(s). Does not need to be operational — offline printers return their would-be match too, so you can see what's waiting for them. |
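Both parameters are documented, so the call itself is simple; handling the result requires assumptions, since the description promises per-printer matches plus 'issues' but there is no output schema. A sketch with hypothetical field names:

```python
# Arguments for call_tool("get_next_queue_items_for_printers", ...):
# comma-separated printer ids per the schema; offline printers still
# return their would-be match.
args = {"printer_id": "3,7,12"}

# Hypothetical result handling; 'printer_id', 'issues', and 'item' are
# assumed field names, not confirmed by any output schema.
def explain(entry: dict) -> str:
    if entry.get("issues"):
        return f"printer {entry['printer_id']}: no match ({entry['issues']})"
    return f"printer {entry['printer_id']}: next item {entry['item']}"
```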
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint, idempotentHint, and destructiveHint=false. The description adds critical details: dedup matcher, never returns same item twice, includes issues for failures, and explicitly states no job is started. No contradictions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Three dense sentences front-load the key purpose, then add dedup and failure details. No wasted words; every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
No output schema, but description more than compensates by explaining what is returned (next queue item, issues). Covers functionality completely for a preview tool with good annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Input schema has 100% description coverage. The description adds context on top: it explains when to set 'compact' to false and clarifies that offline printers still return their would-be match. It enriches already-complete schema documentation rather than compensating for gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states it is a read-only preview that returns the next queue item for each printer. It distinguishes from siblings like get_next_queue_item and create_print_job by emphasizing it does not start jobs and uses dedup.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly says when to use (to preview next items) and contrasts with tools that start jobs. It also explains the dedup behavior and match failures, providing clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_printerBRead-onlyIdempotentInspect
Get details, status, temps, current job, filament, notifications for ONE specific printer by id. Use this (not list_printers) whenever the user asks about a single printer they've already identified — "status of printer X", "what's printer 5 doing", "temperature on Creality K2". Cheaper and less noisy than listing all printers.
| Name | Required | Description | Default |
|---|---|---|---|
| compact | No | Keep at default true. Returns minimal data tuned for AI context. Only set to false if you specifically need fields the compact view drops, and then pair with limit to keep the response small. | |
| printer_id | Yes |
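A minimal sketch; 'printer_id' has no schema description, but the tool description's phrasings ("status of printer X", "what's printer 5 doing") imply a single numeric id:

```python
# Hypothetical arguments for call_tool("get_printer", ...).
args = {"printer_id": 5}  # assumed: one printer's numeric id
# compact defaults to true; set it to false only if a needed field is dropped,
# per the schema description.
```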
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description's bar is lower. It adds useful context by enumerating what is returned (details, status, temps, current job, filament, notifications) and noting it is cheaper and less noisy than listing all printers, though it says nothing about error handling or rate limits. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three front-loaded sentences: what is returned, when to prefer it over 'list_printers' (with concrete user phrasings), and why. Each sentence earns its place with no redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (two parameters, no output schema) and rich annotations, the description is largely adequate: it enumerates the returned data and the intended use cases. It still leaves gaps on error behavior, such as what happens for an unknown printer_id, which keeps it short of full completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The 'compact' parameter is well-documented in the schema, while 'printer_id' shows no schema description; the tool description partially compensates with id-based usage examples ('status of printer X'). Beyond that it adds little parameter semantics, such as the trade-offs between compact and full output.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb ('Get') and resource (details, status, temps, current job, filament, notifications for one printer), and explicitly differentiates from 'list_printers' for the single-printer case. It stops short of contrasting with 'get_print_job' for job-centric queries.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives direct guidance: use this, not 'list_printers', whenever the user has already identified a single printer, and it backs this with example phrasings. It does not state prerequisites (a known printer id) or when a job-level tool would fit better, but the core usage cue is explicit.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_print_jobBRead-onlyIdempotentInspect
Get detailed info for a specific print job.
| Name | Required | Description | Default |
|---|---|---|---|
| extra | No | ||
| item_id | Yes | ||
| getcustomfields | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key traits like read-only, non-destructive, and idempotent, so the description doesn't need to repeat these. It adds value by specifying 'detailed info' and 'specific print job', implying focused retrieval, but doesn't disclose rate limits, auth needs, or response format. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('Get detailed info') without unnecessary words. Every part earns its place by conveying essential purpose, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters (0% schema coverage), no output schema, and no behavioral details beyond annotations, the description is incomplete. It doesn't clarify parameter roles, return values, or error conditions, leaving significant gaps for an AI agent to use it effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 3 parameters, the description fails to compensate by explaining what 'item_id', 'extra', or 'getcustomfields' mean or how they affect the output. It mentions 'specific print job' which hints at 'item_id', but this is insufficient given the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get') and resource ('detailed info for a specific print job'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_queue_item' or 'list_print_jobs', which might retrieve similar information but with different scopes or formats.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives such as 'list_print_jobs' for multiple jobs or 'get_queue_item' for queue-specific details. It lacks context about prerequisites or exclusions, leaving usage decisions ambiguous.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_queue_itemBRead-onlyIdempotentInspect
Get details of a specific queue item
| Name | Required | Description | Default |
|---|---|---|---|
| item_id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior, so the description doesn't need to repeat these. It adds value by specifying 'details of a specific queue item', which clarifies the scope beyond the schema. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It's front-loaded and appropriately sized for the tool's purpose, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given annotations cover safety and behavior, and there's no output schema, the description is adequate but incomplete. It doesn't explain return values or error conditions, leaving gaps for an AI agent to infer. It meets minimum viability with clear room for improvement.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 1 parameter ('item_id'), the description doesn't add meaning beyond the schema. It mentions 'specific queue item' which aligns with the parameter but lacks details like format or constraints. Baseline is 3 due to minimal parameter burden.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get details of a specific queue item' clearly states the verb ('Get') and resource ('queue item'), but it's vague about what 'details' entail and doesn't differentiate from siblings like 'get_next_queue_item' or 'get_queue_item_cost'. It avoids tautology but lacks specificity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives such as 'get_next_queue_item' or 'list_queue'. The description implies usage for a specific item but doesn't mention prerequisites, exclusions, or context for selection among similar tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_queue_item_costARead-onlyIdempotentInspect
Calculate the estimated cost of a queue item, optionally for a specific printer.
| Name | Required | Description | Default |
|---|---|---|---|
| item_id | No | ||
| mms_map | No | ||
| printer_id | No | ||
| queue_item_id | Yes |
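A sketch of the call; 'queue_item_id' is the only required field, and the id values are hypothetical. 'item_id' and 'mms_map' are undocumented and left unset:

```python
# Hypothetical arguments for call_tool("get_queue_item_cost", ...).
args = {
    "queue_item_id": 918,  # required: the queue item to cost out (value assumed)
    "printer_id": 7,       # optional per the description: estimate for this printer
}
```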
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide strong hints: readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, indicating this is a safe, non-destructive, repeatable read operation with deterministic outputs. The description adds value by specifying it's for 'estimated cost' and includes an optional printer parameter, which clarifies the tool's behavior beyond annotations. No contradictions with annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Calculate the estimated cost of a queue item') and adds optional detail ('optionally for a specific printer'). There is no wasted language, and it's appropriately sized for the tool's complexity, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (4 parameters, no output schema), the description is partially complete. It clarifies the tool's purpose and optional printer use, but lacks details on parameter meanings, return values, or error handling. Annotations cover safety and idempotency, but the description doesn't fully address gaps in parameter documentation or output expectations, making it adequate but with notable omissions.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 4 parameters with 0% description coverage, meaning none are documented in the schema. The description only mentions 'queue item' and 'printer' as concepts, but doesn't explain parameters like 'queue_item_id', 'item_id', 'mms_map', or 'printer_id', their relationships, or required usage. It fails to compensate for the low schema coverage, leaving key parameters unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Calculate the estimated cost of a queue item, optionally for a specific printer.' It specifies the verb ('calculate'), resource ('queue item'), and optional scope ('printer'), making it easy to understand. However, it doesn't explicitly distinguish it from the sibling tool 'estimate_file_cost', which might be a similar cost-estimation tool, so it misses full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning 'optionally for a specific printer,' suggesting it can be used with or without printer-specific details. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like 'estimate_file_cost' or other cost-related tools, nor does it mention prerequisites or exclusions, leaving the agent to infer usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_referral_statsBRead-onlyIdempotentInspect
Referral program statistics: conversions, earnings, pending payout.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false), so the description doesn't need to repeat these. It adds context by specifying the types of statistics returned (conversions, earnings, pending payout), which is useful but not extensive behavioral detail. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose ('Referral program statistics') and lists key metrics without any wasted words. It's appropriately sized for a simple read-only tool with no parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, read-only, no output schema), the description is adequate but has gaps. It specifies what statistics are retrieved, but without an output schema, it doesn't detail the return format (e.g., structure, units, time range). Annotations cover safety and behavior, but more context on data scope or limitations would improve completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the schema fully documents the lack of inputs. The description doesn't need to add parameter information, so it meets the baseline of 4 for tools with no parameters, as it doesn't have to compensate for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves 'referral program statistics' with specific metrics (conversions, earnings, pending payout), providing a specific verb+resource. However, it doesn't distinguish from siblings like 'get_account_statistics' or other get_* tools, which would require explicit differentiation for a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., 'get_account_statistics' or other sibling tools), nor does it mention prerequisites or exclusions. Usage is implied by the name and description but not explicitly stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_slicer_settings_for_fileBRead-onlyIdempotentInspect
Retrieve slicer settings stored for a file.
| Name | Required | Description | Default |
|---|---|---|---|
| file | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds no behavioral context beyond this, such as rate limits or authentication needs, but doesn't contradict annotations, so it meets the lower bar with annotations present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words, front-loading the core action and resource efficiently.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the simple input schema (1 parameter) and rich annotations, the description is minimally adequate. However, with no output schema and 0% schema coverage, it should ideally add more parameter context or mention return values to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description doesn't explain the 'file' parameter beyond implying it's needed to retrieve settings. It lacks details like format, constraints, or examples, failing to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Retrieve') and resource ('slicer settings stored for a file'), making the purpose unambiguous. It doesn't differentiate from hypothetical siblings like 'get_file' or 'get_settings', though no listed sibling targets slicer settings specifically.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage based solely on the name and parameters.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
get_webhook_sampleARead-onlyIdempotentInspect
Get one or more realistic sample webhook payloads for a specific event type, matching the envelope a live webhook delivery would produce. Used by integration platforms (Activepieces, n8n, Zapier) to show the payload shape before any real event fires. Falls back to synthetic data when no applicable entity exists in the account.
| Name | Required | Description | Default |
|---|---|---|---|
| event | Yes | ||
| limit | No | ||
| version | No |
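A hedged sketch; the event name is illustrative (no event catalogue is documented), 'limit' presumably caps how many samples come back given the "one or more" wording, and 'version' is left unset because its meaning is undocumented:

```python
# Hypothetical arguments for call_tool("get_webhook_sample", ...).
args = {
    "event": "print.done",  # assumed event-type naming, not confirmed anywhere
    "limit": 2,             # assumed: number of sample payloads to return
}
```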
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the description primarily adds value by noting the fallback to synthetic data when no applicable entity exists. This additional behavior is useful but does not contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise at three sentences, with the main purpose front-loaded and no fluff. Every sentence adds value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of an output schema and 0% input-schema description coverage, the description should provide more context about the payload structure and parameter details. It covers fallback behavior but misses what the payload contains and how parameters like 'limit' and 'version' work. Minimum viable, but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, requiring the description to explain parameters. The description mentions 'event type' but does not detail the 'event', 'limit', or 'version' parameters. 'Limit' is vaguely implied by 'one or more', and 'version' is not addressed. This leaves significant gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool retrieves realistic sample webhook payloads for a specific event type, matching live delivery format. It also mentions use cases for integration platforms. The name 'get_webhook_sample' is self-explanatory, and no sibling tool performs a similar function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explains the tool is used by integration platforms to show payload shape before real events fire, and falls back to synthetic data, giving clear context on when to use. However, it does not explicitly state when not to use or mention alternative tools, though no similar siblings exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
home_printer (B)
Home the printer axes (move to origin position)
| Name | Required | Description | Default |
|---|---|---|---|
| axes | No | Axes to home, space-separated. Examples: "X Y Z" (all), "X Y" (XY only), "Z" (Z only) | X Y Z |
| printer_id | Yes | The printer IDs (comma-separated for multiple) | |
| snippet_id | No | ID of a gcode snippet to send. Either gcode, macro, or snippet_id is required. |
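For illustration, a homing call assembled from the documented parameter formats; the printer ids are placeholders.

```python
# home_printer sketch using the schema's documented formats.
args = {
    "printer_id": "101,102",  # comma-separated for multiple printers (placeholder ids)
    "axes": "X Y",            # home X and Y only; defaults to "X Y Z"
}
```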
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=false, destructiveHint=false, idempotentHint=false, and openWorldHint=false, covering basic safety and behavior. The description adds that this is a movement operation to origin, which provides useful context beyond annotations. However, it doesn't mention operational consequences (e.g., the physical homing move takes time to complete) or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that states the action and outcome without unnecessary words. It's appropriately sized for a simple mechanical operation and front-loads the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a printer control tool with no output schema and moderate complexity (physical movement operation), the description is minimal but adequate given good annotations. It covers the basic purpose but lacks details about execution timing, success indicators, or error handling that would be helpful for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema fully documents all three parameters. The description doesn't add any parameter semantics beyond what's in the schema (e.g., doesn't explain what 'home' means for different axes or why snippet_id might be needed). Baseline 3 is appropriate when schema does the heavy lifting.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('home the printer axes') and the outcome ('move to origin position'), providing a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'move_printer_axis' or 'send_gcode', which might involve similar printer control operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., printer must be idle), exclusions (e.g., don't use during printing), or compare to similar tools like 'move_printer_axis' for manual movement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
inspect_printer_queue (A) · Read-only · Idempotent
Show which queue items match/miss a specific printer and why. Useful for diagnosing why a queue item will not print.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | ||
| printer_id | Yes |
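A sketch of a diagnostic call follows; the printer id is a placeholder, and 'autoprint' is the one mode value the review below mentions, since the schema itself documents neither parameter.

```python
# inspect_printer_queue sketch. "autoprint" is an assumed mode value taken
# from the enum noted in the review; other values, if any, are undocumented.
args = {
    "printer_id": 101,    # required: the printer to diagnose (placeholder)
    "mode": "autoprint",  # optional
}
```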
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits: readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds useful context by explaining that it shows matching/mismatching items and reasons, which helps in diagnosis, but doesn't disclose additional details like rate limits, authentication needs, or response format. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose in the first sentence and adds a concise second sentence for usage context. Every sentence earns its place by providing essential information without redundancy, making it efficiently structured and appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (2 parameters, no output schema), the description is reasonably complete: it clarifies the diagnostic purpose and usage context. However, with 0% schema description coverage and no output schema, it could benefit from more details on parameter usage or expected output format, though annotations provide good behavioral coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'specific printer', which aligns with the required 'printer_id' parameter, and implies diagnostic output but doesn't detail the optional 'mode' parameter or its 'autoprint' enum. It adds some meaning but doesn't fully compensate for the lack of schema descriptions, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('show', 'match/miss', 'diagnosing') and resources ('queue items', 'printer'), and distinguishes it from siblings like 'list_queue' or 'get_queue_item' by focusing on diagnostic matching rather than general listing or retrieval. It explicitly mentions the diagnostic use case for understanding why items won't print.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('Useful for diagnosing why a queue item will not print'), which implicitly suggests it's for troubleshooting rather than routine queue management. However, it doesn't explicitly state when not to use it or name specific alternatives among the many sibling tools, such as 'list_queue' for general listing or 'get_queue_item' for individual item details.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_brand_filaments (B) · Read-only · Idempotent
List filament products from a specific brand.
| Name | Required | Description | Default |
|---|---|---|---|
| item_id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description's burden is lower. It adds value by specifying the scope ('from a specific brand'), which isn't captured in annotations, but doesn't detail aspects like pagination, rate limits, or error handling. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words, front-loading the core purpose efficiently. It's appropriately sized for a simple listing tool.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (one parameter) and rich annotations, the description is minimally adequate but incomplete. It lacks output details (no schema provided), parameter explanations, and usage context, relying heavily on annotations for behavioral understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for the undocumented parameter 'item_id'. It fails to explain what 'item_id' represents (e.g., brand identifier, product ID) or its format, leaving the parameter's meaning unclear beyond the schema's type definition.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('filament products from a specific brand'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from the sibling tool 'list_filaments' (which might list all filaments vs. brand-specific ones), preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'list_filaments' or 'get_filament', nor does it mention prerequisites or context for usage. It merely restates the purpose without operational context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_custom_fields (B) · Read-only · Idempotent
List custom field definitions with pagination and search.
| Name | Required | Description | Default |
|---|---|---|---|
| page | Yes | ||
| search | No | ||
| sort_id | No | ||
| end_date | No | ||
| sort_dir | No | ||
| page_size | Yes | ||
| start_date | No |
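Since none of the seven parameters carry schema descriptions, the following sketch shows one plausible reading of them; the search semantics and pagination behavior are assumptions inferred from the parameter names.

```python
# list_custom_fields sketch. Parameter meanings are inferred from their names;
# the substring-search semantics and 1-based paging are assumptions.
args = {
    "page": 1,             # required: assumed 1-based page number
    "page_size": 25,       # required: assumed items per page
    "search": "material",  # optional: assumed substring match on field names
}
```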
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide readOnlyHint=true, destructiveHint=false, and idempotentHint=true, so the agent knows this is a safe, repeatable read operation. The description adds 'with pagination and search' which provides useful behavioral context about how results are returned and filtered. However, it doesn't mention rate limits, authentication needs, or what format the results take.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('List custom field definitions') and adds two key behavioral aspects ('with pagination and search'). Every word earns its place with zero waste or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only listing tool with good annotations (readOnlyHint, idempotentHint), the description provides the basic purpose and mentions pagination/search behavior. However, with 7 parameters (2 required) and 0% schema coverage, the parameter documentation is inadequate. No output schema exists, so the description should ideally mention what gets returned, but doesn't.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 7 parameters, the description carries significant burden but only mentions 'pagination and search' - covering just 2 of 7 parameters (page/page_size and search). It doesn't explain sort_id, sort_dir, start_date, end_date, or the relationships between parameters. The description adds minimal value beyond what the bare schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('custom field definitions'), making the purpose immediately understandable. It distinguishes from sibling 'list_custom_fields_for' by not specifying a target object, but doesn't explicitly contrast them. The description is specific enough to understand what the tool does without being tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'pagination and search' which implies some usage context, but provides no explicit guidance on when to use this tool versus alternatives like 'list_custom_fields_for' or other listing tools. There's no mention of prerequisites, appropriate scenarios, or when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_custom_fields_for (A) · Read-only · Idempotent
List custom field definitions for a specific entity category and optional sub-category (e.g. PRINT + PRINT_QUEUE for queue-item custom fields).
| Name | Required | Description | Default |
|---|---|---|---|
| category | Yes | ||
| subCategory | No | ||
| includeDisabled | No |
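The description's own example translates directly into a call; only the includeDisabled reading is an assumption.

```python
# list_custom_fields_for, using the PRINT + PRINT_QUEUE example from the
# description. includeDisabled=False is an assumed default reading.
args = {
    "category": "PRINT",           # required enum value
    "subCategory": "PRINT_QUEUE",  # optional enum value
    "includeDisabled": False,      # assumption: hide disabled field definitions
}
```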
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide clear behavioral hints (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false), covering safety and idempotency. The description adds minimal context by specifying filtering capabilities, but does not disclose additional traits like rate limits, auth needs, or return format. It does not contradict annotations, so it earns a baseline score for adding some value beyond structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose and includes a helpful example in parentheses. Every word contributes to understanding without waste, making it appropriately sized and well-structured for quick comprehension.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters with enums, no output schema) and rich annotations, the description is adequate but has gaps. It covers the filtering purpose and provides an example, but lacks details on return values, error conditions, or prerequisites. With annotations handling safety and idempotency, the description is minimally complete but could be more informative for agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but all parameters have enums that define allowed values. The description adds meaning by explaining that 'category' and 'subCategory' filter entity types, with an example, and implies 'includeDisabled' controls visibility of disabled fields. However, it does not fully compensate for the lack of schema descriptions, as parameter purposes are only partially clarified without detailed semantics or usage notes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('custom field definitions') with specific targeting ('for a specific entity category and optional sub-category'). It distinguishes from the sibling 'list_custom_fields' by specifying filtering capabilities, though not explicitly naming the alternative. The purpose is unambiguous but could be more explicit about the sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by mentioning entity categories and sub-categories with an example ('PRINT + PRINT_QUEUE'), but does not explicitly state when to use this tool versus alternatives like 'list_custom_fields' or other listing tools. It provides some guidance through the example but lacks explicit when/when-not instructions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_filament_colors (A) · Read-only · Idempotent
List available filament colors (for UI pickers or picking similar spools).
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate read-only, non-destructive, and idempotent behavior, which the description doesn't contradict. The description adds value by specifying the use case ('UI pickers or picking similar spools'), providing practical context beyond the annotations, though it doesn't detail rate limits or auth needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the purpose and adds useful context without any wasted words. Every part earns its place, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's simplicity (0 parameters, annotations cover safety), the description is complete enough for its purpose. It lacks an output schema, but for a list tool with clear annotations, the description's context on use cases suffices, though more detail on return format could enhance it.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the baseline is 4 as there are no parameters to document. The description doesn't need to add parameter details, so it meets expectations without compensation needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('available filament colors'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'list_filaments' or 'list_brand_filaments', which might list different filament attributes, so it misses full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('for UI pickers or picking similar spools'), which helps guide usage. It doesn't specify when not to use it or name explicit alternatives among siblings, but the context is sufficient for informed selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_filaments (A) · Read-only · Idempotent
List filament spools with optional filters and sorting. Accounts often have hundreds of spools, so ALWAYS pair sort_by with a limit. Use sort_by=last_used + limit=10 for "most used / most popular spool" questions. Use sort_by=created + limit=N for recently added. Use sort_by=left for emptiest/fullest. Filters (material_type/brand/color) are case-insensitive substring matches. If the user names a specific spool by id or 4-character short id (e.g. "T2SO"), call get_filament instead — do NOT list and grep.
| Name | Required | Description | Default |
|---|---|---|---|
| brand | No | Brand name substring | |
| color | No | Color substring — matches hex, name, or color group | |
| empty | No | true = only empty spools; false = only non-empty spools | |
| limit | No | Cap on number of spools returned after sorting/filtering. Use this with sort_by to grab e.g. the 3 newest. | |
| compact | No | Keep at default true. Returns minimal data tuned for AI context. Only set to false if you specifically need fields the compact view drops, and then pair with limit to keep the response small. | |
| sort_by | No | Field to sort by. Defaults to created (newest first). | |
| assigned | No | true = only spools assigned to a printer; false = only unassigned spools | |
| sort_dir | No | Sort direction. Defaults to desc. | |
| printer_id | No | Only spools assigned to this printer id | |
| material_type | No | Material type substring (e.g. "PLA", "PETG") |
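The description's 'most used spool' recipe maps directly onto arguments; the material filter is added here only to illustrate the documented substring matching.

```python
# list_filaments: "most used spools" recipe straight from the description,
# which says to always pair sort_by with a limit. The PLA filter illustrates
# the documented case-insensitive substring matching.
args = {
    "sort_by": "last_used",
    "limit": 10,
    "material_type": "PLA",  # optional filter
}
```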
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover read-only, non-destructive, and idempotent behavior, and the description adds genuine behavioral context beyond them: accounts often hold hundreds of spools, sort_by should always be paired with a limit, filters are case-insensitive substring matches, and known spool ids should go to get_filament instead. Rate limits and authentication are still unmentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description runs several sentences rather than one, but it is front-loaded ('List filament spools with optional filters and sorting') and each later sentence adds a concrete usage recipe rather than filler. Little is wasted, though some of the sort_by guidance duplicates what the parameter descriptions could carry.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only listing tool with ten documented optional parameters, the description is unusually complete: it supplies sorting recipes for common questions, filter semantics, and explicit routing to get_filament. The main gap is the lack of an output schema describing the returned spool records.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
All ten parameters carry schema descriptions, and the tool description layers cross-parameter guidance on top (pair sort_by with limit; keep compact at its default true). Filter semantics (case-insensitive substring matching) are spelled out as well, leaving little undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb ('List') and resource ('filament spools') and explicitly differentiates itself from the sibling get_filament ('do NOT list and grep' when the user names a specific spool). It does not contrast itself with list_brand_filaments or list_filament_colors, but the most confusable sibling is handled.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description contains explicit use-X-instead-of-Y guidance: when the user names a spool by id or 4-character short id, call get_filament rather than listing and filtering. It also maps common question patterns ('most used', 'recently added', 'emptiest') to concrete argument combinations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_files (B) · Read-only · Idempotent
List files and folders in the user's SimplyPrint storage. With pid set, only returns files compatible with that printer (full compatibility check). With global_search=true (default), searches recursively across all folders.
| Name | Required | Description | Default |
|---|---|---|---|
| f | No | Folder id to list (0 or omitted = root) | |
| limit | No | Cap on returned files (1-500). Useful with sort_by for "5 largest files" or "10 most recent". | |
| search | No | Filename substring search. Matches with or without extension — "Schleife oben 1_PLA_1h4m" and "Schleife oben 1_PLA_1h4m.3mf" both find the same file. | |
| compact | No | Keep at default true. Returns minimal data tuned for AI context. Only set to false if you specifically need fields the compact view drops, and then pair with limit to keep the response small. | |
| sort_by | No | Sort files by this field (folders are not affected). Default: repository order (user's saved files-page sort). | |
| sort_dir | No | Sort direction; default desc (newest/largest first). | |
| has_gcode | No | true = only files with a parsed G-code analysis (printable); false = only non-printable files (models, etc.) | |
| printer_id | No | Only return files compatible with this printer (full compatibility check, not just model) | |
| global_search | No | If true (default), search recursively across all folders instead of only the current one |
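A '5 largest printable files' query as a sketch; 'size' is an assumed sort_by value because the schema does not enumerate sortable fields.

```python
# list_files sketch for "5 largest printable files". "size" is an assumed
# sort_by value; the schema names no sortable fields explicitly.
args = {
    "sort_by": "size",
    "sort_dir": "desc",
    "limit": 5,
    "has_gcode": True,  # documented: only files with parsed G-code analysis
}
```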
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already provide key behavioral hints: readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds behavior beyond them: setting the printer filter returns only compatible files (a full compatibility check), and global_search defaults to recursive search across all folders. Rate limits, authentication, and pagination behavior remain undisclosed.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is three compact sentences: the core action first, followed by the two behaviors most likely to surprise an agent (compatibility filtering and recursive search by default). Nothing is wasted.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
With nine parameters, all carrying schema descriptions, plus annotations covering safety, the description addresses the behaviors that matter most (compatibility filtering, the recursive-search default). The missing output schema and any pagination details are the remaining gaps for first-attempt success.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is effectively complete: every listed parameter has a description, several with concrete examples (the search matching note, the 1-500 limit range). One rough edge: the description says 'pid' while the schema names the parameter 'printer_id', a mismatch that could briefly confuse an agent.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List') and resource ('files and folders in the user's SimplyPrint storage'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'list_printers' or 'list_queue', which also list resources but different types.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when to use it over other listing tools (e.g., 'list_printers' for printers) or how it relates to file management tools like 'get_folder' or 'move_file', leaving the agent to infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_pending_queue_items (B) · Read-only · Idempotent
List queue items pending approval (status: PENDING, DENIED, or REVISION).
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| status | No | ||
| per_page | No |
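Because the three parameters are undocumented, this sketch reflects one plausible reading; the status value mirrors the statuses named in the description, but exact casing and pagination semantics are assumed.

```python
# list_pending_queue_items sketch. The status value mirrors the description's
# "PENDING, DENIED, or REVISION"; casing and paging behavior are assumptions.
args = {
    "status": "PENDING",
    "page": 1,
    "per_page": 20,
}
```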
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already cover key behavioral traits (readOnlyHint: true, destructiveHint: false, idempotentHint: true, openWorldHint: false), so the agent knows this is a safe, non-destructive read operation. The description adds value by specifying the status filter criteria ('PENDING, DENIED, or REVISION'), which isn't in the annotations, but it doesn't disclose other behaviors like pagination details, rate limits, or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero waste—it directly states the tool's purpose and scope without unnecessary words. It's front-loaded with the core action and criteria, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a list operation with 3 parameters), lack of output schema, and rich annotations, the description is incomplete. It doesn't explain return values, pagination behavior, or how parameters interact (e.g., if 'status' defaults to all listed statuses). While annotations cover safety, the description fails to provide sufficient operational context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description doesn't mention any parameters, leaving all three (page, status, per_page) undocumented. However, the description implies a status filter through 'status: PENDING, DENIED, or REVISION', which loosely relates to the 'status' parameter, adding minimal semantic context but not compensating fully for the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('queue items pending approval') with specific status criteria ('PENDING, DENIED, or REVISION'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'list_queue' or 'inspect_printer_queue', which might also list queue items but with different scopes or filters.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'list_queue' or 'get_next_queue_item', nor does it mention prerequisites, exclusions, or specific contexts. It only defines the scope of items listed, leaving usage decisions to inference.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_printers (B) · Read-only · Idempotent
List MULTIPLE printers with optional filters. For details on a single known printer, call get_printer instead — do not list-then-filter. For farm-wide counts ("how many printers are doing X"), call get_farm_overview instead — do not paginate this. Use status=["online"] to only see reachable printers, status=["printing"] for ones actively printing, status=["was_printing_when_offline"] for printers that dropped mid-print, status=["idle"] for available printers ready to accept a job, status=["awaiting_bed_clear"] for printers whose last print finished but the bed has not been cleared yet (NOT print_pending, which means a queued staggered start).
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | Page number (1-based) when paginating beyond limit/page_size. | |
| tags | No | Filter printers that have any of the given tag names | |
| limit | No | Cap on results (1-100). Alias for page_size. Use this with sort_by for "top N" queries. | |
| search | No | ||
| status | No | Filter by status. Multiple values are OR-matched. Macros (derived): online, offline, idle, was_printing_when_offline, in_maintenance, print_pending, can_accept_commands, awaiting_bed_clear. Raw PrinterStatus: printing, operational, paused, pausing, resuming, cancelling, error, downloading, unknown. Note: awaiting_bed_clear means a print finished but the bed has not been cleared yet (online + operational + still has a job) — use this NOT print_pending (which means a queued/staggered start) when the user asks "which printers need to be cleared off?". | |
| compact | No | Keep at default true. Returns minimal data tuned for AI context. Only set to false if you specifically need fields the compact view drops, and then pair with limit to keep the response small. | |
| sort_by | No | Sort by this field. Default: stable (no sort). | |
| sort_id | No | ||
| group_id | No | Only return printers in this printer group | |
| order_by | No | ||
| sort_dir | No | Sort direction; default asc when sort_by is set. | |
| page_size | No | ||
| job_columns | No | ||
| out_of_order | No | If true, only out-of-order printers; if false, only in-order printers (enterprise feature) | |
| printer_columns | No | ||
| filament_columns | No |
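The description's bed-clearing recipe translates directly into a call; only the limit is added here as a placeholder cap.

```python
# list_printers: "which printers need to be cleared off?", built from the
# description's instruction to use awaiting_bed_clear, not print_pending.
args = {
    "status": ["awaiting_bed_clear"],
    "limit": 20,  # optional cap on results
}
```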
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds real behavioral guidance beyond that: status macro semantics (including the awaiting_bed_clear vs print_pending distinction) and explicit redirects to get_printer and get_farm_overview. Rate limits and authentication requirements are still not mentioned.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is long, but it is front-loaded and dense: the first sentence states the purpose, the next two redirect the agent to better-suited siblings, and the remainder maps status values to the user questions they answer. Some of the status guidance is repeated nearly verbatim in the status parameter description, which is the one clear redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (seventeen parameters, roughly half of them documented, and no output schema), the description still leaves gaps: nothing explains the column-selection parameters (job_columns, printer_columns, filament_columns) or search, sort_id, order_by, and page_size. The status guidance and sibling routing are strong, but an agent must guess at the rest.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is partial: nine of seventeen parameters are documented, while search, sort_id, order_by, page_size, and the three column-selection parameters are not. The description compensates heavily for status, enumerating both derived macros and raw PrinterStatus values with usage notes, but says nothing about the undocumented parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the verb ('List') and scope ('MULTIPLE printers with optional filters') and explicitly names its siblings: get_printer for a single known printer and get_farm_overview for farm-wide counts. This is among the clearest sibling differentiation on the server.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
This description carries explicit use-X-instead-of-Y guidance: call get_printer for a single known printer rather than list-then-filter, and call get_farm_overview for farm-wide counts rather than paginating this tool. The status recipes further tell the agent which filter answers which user question.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_print_jobs (B) · Read-only · Idempotent
List past print jobs with pagination, filters, and sort. Compact mode (default) drops gcodeAnalysis, filament breakdowns, and panel-only flags.
| Name | Required | Description | Default |
|---|---|---|---|
| page | Yes | Page number (1-based) | |
| limit | No | Cap on results per page (1-100). Alias for page_size. | |
| search | No | Filename / UID / custom-field substring search | |
| compact | No | Keep at default true. Returns minimal data tuned for AI context. Only set to false if you specifically need fields the compact view drops, and then pair with limit to keep the response small. | |
| sort_by | No | Sort by this field. Default: started date desc (newest first). | |
| sort_id | No | ||
| end_date | No | Latest start (ISO-8601) | |
| sort_dir | No | Sort direction; default desc. | |
| spool_id | No | Only jobs that used this filament spool | |
| user_ids | No | Only jobs started by these users (requires VIEW_ALL_PRINT_HISTORY) | |
| page_size | Yes | Items per page (1-100) | |
| start_date | No | Earliest start (ISO-8601) | |
| printer_ids | No | Only jobs from these printer ids | |
| printer_types | No | ||
| printer_groups | No | Only jobs from printers in these groups | |
| archived_status | No | null (default): hide archived; archived: only archived; both: include archived | |
| accepted_statuses | No | Filter by job status. Multiple values OR-matched. |
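A month-window history query as a sketch; the printer id is a placeholder and the list type for printer_ids is an assumption, though the ISO-8601 date format comes from the schema.

```python
# list_print_jobs sketch: one printer's jobs for January 2025. Dates follow
# the ISO-8601 format the schema specifies; printer id 101 is a placeholder
# and the list type for printer_ids is assumed.
args = {
    "page": 1,
    "page_size": 50,
    "start_date": "2025-01-01T00:00:00Z",
    "end_date": "2025-02-01T00:00:00Z",
    "printer_ids": [101],
}
```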
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds the compact-mode behavior (dropping gcodeAnalysis, filament breakdowns, and panel-only flags), which is genuinely useful output context. Rate limits, authentication, and the response format beyond that remain implicit.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two tight sentences: the purpose first, then exactly which fields compact mode drops. No redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Of the seventeen parameters, all but sort_id and printer_types carry schema descriptions, several noting constraints (1-100 ranges, ISO-8601 dates) and permissions (user_ids requires VIEW_ALL_PRINT_HISTORY). The missing output schema is the main remaining gap for first-attempt success.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is high: fifteen of seventeen parameters are documented, including defaults and value ranges; only sort_id and printer_types lack descriptions. The description itself adds the compact-mode field list rather than repeating parameter details, a reasonable division of labor.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('past print jobs') with additional context about capabilities ('with pagination and filters'). It distinguishes itself from siblings like 'get_print_job' (singular) and 'list_queue' (different resource), though it doesn't explicitly name alternatives. The purpose is specific and unambiguous.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'get_print_job' (for single jobs) or 'list_queue' (for queued items). It mentions pagination and filters but doesn't specify use cases or prerequisites. Without explicit when/when-not instructions, the agent must infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_queue (C) · Read-only · Idempotent
List print queue items with optional filters. Supports filtering by assigned printer(s), group, approval status, tags, queue-item custom-field values, and age. Example: older_than_days=7 for "items added more than 7 days ago", or custom_fields=[{id: "", value: "Engineering"}] for a specific department field.
| Name | Required | Description | Default |
|---|---|---|---|
| p | No | ||
| pf | No | ||
| page | No | ||
| tags | No | Match items that have any of the given tag names | |
| compact | No | Keep at default true. Returns minimal data tuned for AI context. Only set to false if you specifically need fields the compact view drops, and then pair with limit to keep the response small. | |
| sort_by | No | Sort by this field. Default: sort_position (ascending — matches the queue's display order). | |
| group_id | No | Only return items in this queue group | |
| sort_dir | No | Sort direction; default asc. | |
| page_size | No | Items per page (1-100). Use this as the result-size cap. | |
| printer_id | No | Only return queue items assigned to (via for_printers) any of these printer ids | |
| added_after | No | Items created strictly after this ISO-8601 date | |
| added_before | No | Items created strictly before this ISO-8601 date | |
| custom_fields | No | Filter by custom field values. Each entry must specify the field id (uuid) and either value (exact match against string/number/boolean/date/options) or contains (case-insensitive substring on text fields). Multiple entries are AND-joined. | |
| approval_status | No | Filter by approval status. Default excludes denied. | |
| newer_than_days | No | Items created within the last this many days | |
| older_than_days | No | Items created more than this many days ago |
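Both filters below come from the description's own examples; the custom-field id stays blank here, exactly as the original example leaves it.

```python
# list_queue: stale-item audit assembled from the description's own examples.
args = {
    "older_than_days": 7,
    "custom_fields": [
        # id is blank in the description's example; a real field uuid goes here
        {"id": "", "value": "Engineering"},
    ],
}
```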
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this as read-only, non-destructive, and idempotent, so the description doesn't need to repeat safety information. It adds behavioral detail beyond them: the supported filter dimensions (printer assignment, group, approval status, tags, custom-field values, age) with two concrete examples. Rate limits and response format are not addressed. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded (purpose first, then the filter list), and the closing example sentence earns its place by showing the non-obvious older_than_days and custom_fields syntax. It is appropriately sized for a filter-heavy list operation.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
The cryptic 'p' and 'pf' parameters and 'page' are undocumented in both schema and description, and there is no output schema, so an agent cannot know what those parameters do or what the response looks like. The description explains the filter surface well, but those gaps remain.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The description documents the filter parameters well, including an example of the custom_fields entry shape, but the undocumented 'p' and 'pf' parameters are never mentioned, leaving the agent with no semantic understanding of them. The custom_fields example shows an empty id placeholder, which hints at the uuid requirement without saying where to obtain one.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('print queue items'), with the filter list providing implicit scope. It doesn't explicitly differentiate from siblings like 'list_pending_queue_items' (approval-focused) or 'inspect_printer_queue' (diagnostic), although the approval_status filter overlaps with the former.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description gives example-driven usage hints ('items added more than 7 days ago', department custom fields) but no tool-vs-tool guidance: nothing says when to prefer 'list_pending_queue_items' for approval workflows or 'inspect_printer_queue' for diagnosing a specific printer.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_queue_comments (B) · Read-only · Idempotent
Retrieve all approval comments on a queue item or user file.
| Name | Required | Description | Default |
|---|---|---|---|
| file_id | No | ||
| item_id | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (read-only, non-destructive, idempotent, closed-world), so the description doesn't need to repeat these. It adds value by specifying the scope ('all approval comments') and the two possible input types (queue item or user file), which aren't captured in annotations. No contradiction with annotations is present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core functionality without unnecessary words. Every part of it contributes directly to understanding the tool's purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the annotations provide good behavioral context (read-only, etc.) and the tool is relatively simple (list operation with 2 parameters), the description is adequate but has gaps. It lacks output details (no schema), parameter guidance, and usage context, making it minimally viable but not fully comprehensive.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters 'file_id' and 'item_id' are undocumented in the schema. The description mentions 'queue item or user file', which hints at their purposes but doesn't explain their semantics, formats, or mutual exclusivity. This partial compensation is insufficient for full clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Retrieve') and target ('all approval comments on a queue item or user file'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'get_queue_item' or 'list_queue', which might also provide comment-related information, so it doesn't reach the highest score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage based on the tool name and parameters alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
list_queue_groups (B) · Read-only · Idempotent
List all queue groups in the user's account.
| Name | Required | Description | Default |
|---|---|---|---|
| item_id | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key traits (read-only, non-destructive, idempotent, closed-world), so the description adds minimal value. It implies a listing operation but doesn't disclose behavioral details like pagination, sorting, or error conditions. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action ('List all queue groups') without unnecessary details. Every word contributes directly to the purpose, making it highly concise and well-structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (no required parameters, simple listing) and rich annotations, the description is minimally adequate. However, without an output schema, it doesn't explain return values (e.g., format of queue groups), leaving a gap in completeness for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 1 parameter ('item_id'), the description doesn't mention parameters, leaving them undocumented. However, since no parameters are required and the tool likely functions without them, this is acceptable, though not ideal for clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('queue groups'), specifying scope ('all' and 'in the user's account'). It distinguishes from siblings like 'list_queue' or 'list_queue_comments' by focusing on queue groups, though it doesn't explicitly contrast them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'list_queue' or 'get_queue_item'. The description lacks context about prerequisites, such as needing an account, or exclusions, leaving the agent to infer usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
mark_filament_dried (A)
Mark a filament spool as freshly dried (resets humidity tracking).
| Name | Required | Description | Default |
|---|---|---|---|
| dried_at | No | | |
| filament_id | Yes | The filament IDs (comma-separated for multiple) | |
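A sketch of the `arguments` payload, reusing the envelope above. The comma-separated batching comes straight from the `filament_id` schema description; the `dried_at` format is undocumented, so the ISO 8601 timestamp is an assumption.

```typescript
// Arguments for mark_filament_dried. filament_id is documented as
// comma-separated for multiple spools; dried_at's format is undocumented,
// so the ISO 8601 string below is an assumed shape.
const markDriedArgs = {
  filament_id: "101,102,103",       // three spools in one call
  dried_at: "2024-06-01T10:00:00Z", // hypothetical timestamp format
};
```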
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive, non-idempotent mutation. The description adds context by specifying that it 'resets humidity tracking,' which is a behavioral detail not covered by annotations. However, it lacks information on permissions, side effects, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and effect. There's no wasted verbiage, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no output schema and moderate complexity (2 parameters, 1 required), the description covers the basic purpose and effect. However, it lacks details on usage context, parameter specifics, and behavioral nuances like idempotency or error handling, leaving gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 50% (only 'filament_id' has a description). The description mentions 'filament spool' and 'humidity tracking,' which loosely relates to parameters but doesn't explain 'dried_at' format or clarify that 'filament_id' can be comma-separated for multiple IDs. With partial schema coverage, it adds minimal value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Mark a filament spool as freshly dried') and the resource affected ('filament spool'), with the additional effect of 'resets humidity tracking' that distinguishes it from potential siblings like 'adjust_filament_weight' or 'assign_filament'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. While the purpose is clear, there's no mention of prerequisites (e.g., needing the filament to be physically dried first), exclusions, or related tools like 'get_filament_history' for context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
match_file_to_printers (A) · Read-only · Idempotent
Find which printer models in the account are compatible with a given file.
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | | |
| gcodeAnalysis | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=true, destructiveHint=false, openWorldHint=false, and idempotentHint=true, covering safety and idempotency. The description adds context about checking compatibility, which is valuable behavioral information not covered by annotations. No contradictions exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that directly states the tool's purpose without unnecessary words. It is front-loaded and efficiently conveys the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with no output schema and 0% schema description coverage, the description is adequate but lacks details on return values, parameter usage, or error conditions. Annotations provide good behavioral coverage, but the description could better compensate for schema gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'a given file' but does not clarify what 'model' or 'gcodeAnalysis' parameters represent or their expected formats. It adds minimal semantic value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with a specific verb ('Find') and resource ('printer models'), specifying the scope ('in the account') and condition ('compatible with a given file'). It distinguishes itself from sibling tools like 'list_printers' or 'get_printer' by focusing on compatibility matching rather than general listing or retrieval.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when checking file-printer compatibility, but does not explicitly state when to use this tool versus alternatives like 'list_printers' or 'get_slicer_settings_for_file'. It provides clear context for its purpose but lacks explicit exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
move_file (C)
Move one or more files to a folder.
| Name | Required | Description | Default |
|---|---|---|---|
| files | Yes | | |
| folder | Yes | | |
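Since the schema types `files` as a string and `folder` as an integer but documents neither, any call involves guesswork; the sketch below assumes `files` holds comma-separated file IDs and `folder` a destination folder ID.

```typescript
// Assumed arguments for move_file: both readings below are guesses,
// since the schema documents neither parameter.
const moveFileArgs = {
  files: "4501,4502", // hypothetical: comma-separated file IDs
  folder: 12,         // hypothetical: destination folder ID
};
```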
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive operation (readOnlyHint: false, destructiveHint: false), which the description aligns with by implying a move action. However, the description adds minimal behavioral context beyond annotations—it doesn't specify if moves are atomic, overwrite existing files, or require permissions, missing opportunities to clarify operational details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words, front-loading the core action. It efficiently conveys the tool's purpose without unnecessary elaboration, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 2 parameters, 0% schema coverage, no output schema, and annotations covering only basic hints, the description is insufficient. It lacks details on parameter usage, error conditions, return values, or interaction with siblings like 'move_folder', leaving significant gaps for an AI agent to operate effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the input schema provides no details on 'files' (string) and 'folder' (integer) parameters. The description fails to compensate by explaining what 'files' represents (e.g., file IDs, paths, or names) or how 'folder' is identified (e.g., folder ID), leaving parameters semantically undefined.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Move') and resources ('one or more files', 'to a folder'), making the purpose immediately understandable. It distinguishes from siblings like 'move_folder' by specifying files as the target, though it doesn't explicitly contrast with 'update_file' or 'delete_file' for file operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'update_file' (which might handle file metadata) or 'move_folder' (for moving folders). The description lacks context about prerequisites, such as whether files must exist or the folder must be valid, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
move_folder (B)
Move a folder to another location.
| Name | Required | Description | Default |
|---|---|---|---|
| folder | Yes | | |
| target | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-idempotent, non-destructive operation (readOnlyHint=false, idempotentHint=false, destructiveHint=false). The description adds that it moves folders, implying mutation and potential side effects, but doesn't elaborate on permissions, error conditions, or what happens to nested content. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words, efficiently conveying the core action. It's appropriately sized for a simple operation and front-loaded with essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no output schema and 0% schema description coverage, the description is insufficient. It lacks details on parameters, return values, error handling, and behavioral nuances (e.g., effects on nested items, permissions required), leaving significant gaps for an AI agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, with two required integer parameters ('folder' and 'target') undocumented in the schema. The description only vaguely implies these parameters represent folder and target locations, without specifying what the integers refer to (e.g., IDs, paths) or any format constraints, failing to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('move') and resource ('folder'), specifying the operation as moving to another location. It distinguishes from sibling 'move_file' by focusing on folders, but doesn't explicitly differentiate from other folder operations like 'create_folder' or 'delete_folder'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., needing folder IDs), constraints (e.g., cannot move to same location), or when to choose this over similar tools like 'move_file' or 'update_file'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
move_printer_axis (B)
Move a printer axis to a relative position
| Name | Required | Description | Default |
|---|---|---|---|
| speed | No | Movement speed in mm/min. If not specified, uses printer default. | |
| distance | Yes | Distance to move in millimeters | |
| direction | Yes | Axis and direction to move. Use axis letter, optionally with minus for negative. Examples: "X" (+X), "X-" (-X), "Y", "Y-", "Z", "Z-" | |
| printer_id | Yes | The printer IDs (comma-separated for multiple) | |
| snippet_id | No | ID of a gcode snippet to send. Either gcode, macro, or snippet_id is required. | |
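Because this schema is fully documented, a call can be composed with confidence. The sketch below jogs the Z axis down 5 mm at 600 mm/min on two printers at once; the printer IDs are placeholders.

```typescript
// Arguments for move_printer_axis, built from the documented schema:
// jog the Z axis down 5 mm at 600 mm/min on two printers.
const moveAxisArgs = {
  printer_id: "7,8", // placeholder IDs, comma-separated for multiple printers
  direction: "Z-",   // axis letter plus minus for negative travel
  distance: 5,       // millimeters, relative to the current position
  speed: 600,        // mm/min; omit to use the printer default
};
```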
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already indicate this is a non-readOnly, non-destructive operation, but the description adds minimal behavioral context. It specifies 'relative position' movement, which is useful beyond annotations, but doesn't mention safety considerations (e.g., collision risks), physical effects, or error conditions. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that directly states the tool's purpose without any fluff or redundancy. It's appropriately sized and front-loaded, making it easy to understand at a glance while efficiently using minimal words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 5 parameters, no output schema, and annotations covering basic safety (non-destructive), the description is minimally adequate. It states the core action but lacks context about typical workflows, error handling, or integration with other tools (e.g., checking printer status first). Given the complexity, it should provide more operational guidance.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, all parameters are well-documented in the schema itself. The description doesn't add any meaningful parameter semantics beyond what's already in the schema (e.g., explaining relationships between parameters or usage patterns). The baseline of 3 is appropriate given the comprehensive schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Move') and target ('a printer axis to a relative position'), providing a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'home_printer' or 'send_gcode' which also involve printer movement, leaving room for improvement in distinguishing its specific scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites (e.g., printer must be connected/idle), comparison to similar tools like 'home_printer' (absolute positioning) or 'send_gcode' (custom commands), or typical use cases (e.g., manual adjustment during maintenance).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
move_queue_item (B)
Move one or more queue items to a different queue group.
| Name | Required | Description | Default |
|---|---|---|---|
| jobs | Yes | | |
| target_group_id | Yes | | |
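With both parameters undocumented, a caller must guess at formats; the sketch below assumes `jobs` is a comma-separated list of queue item IDs and `target_group_id` a queue group ID.

```typescript
// Assumed arguments for move_queue_item; both shapes are guesses,
// since the schema documents neither parameter.
const moveQueueItemArgs = {
  jobs: "311,312",    // hypothetical: comma-separated queue item IDs
  target_group_id: 4, // hypothetical: destination queue group ID
};
```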
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide basic safety information (readOnlyHint=false, destructiveHint=false), indicating this is a non-destructive mutation. The description adds that it moves items between queue groups, which is useful context beyond annotations. However, it doesn't disclose important behavioral details like whether this operation requires specific permissions, what happens to item ordering, or if there are constraints on which items/groups can be targeted.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that communicates the core functionality without unnecessary words. It's appropriately sized for the tool's apparent complexity and front-loads the essential information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 2 undocumented parameters, 0% schema coverage, no output schema, and only basic annotations, the description is incomplete. It doesn't explain parameter formats, constraints, error conditions, or what constitutes valid 'queue items' and 'queue groups'. The context signals indicate significant gaps that the description doesn't adequately address.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and no output schema, the description carries full burden for parameter explanation. It mentions 'queue items' and 'different queue group' which loosely map to the 'jobs' and 'target_group_id' parameters, but provides no details about format (e.g., what 'jobs' string represents), constraints, or expected outcomes. This is insufficient for a tool with 2 undocumented parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Move') and resource ('queue items') with scope ('to a different queue group'), providing a specific verb+resource combination. However, it doesn't explicitly distinguish this tool from sibling tools like 'reorder_queue_item' or 'reorder_queue_group', which might involve similar queue manipulation concepts.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There are multiple sibling tools that manipulate queue items (e.g., 'reorder_queue_item', 'remove_from_queue', 'approve_queue_item'), but the description doesn't indicate when moving is appropriate versus other operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
pause_print (B)
Pause the current print on a printer
| Name | Required | Description | Default |
|---|---|---|---|
| printer_id | Yes | The printer IDs (comma-separated for multiple) | |
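The single documented parameter makes the call trivial. The sketch below pauses three printers in one batch (placeholder IDs); the matching `resume_print` tool accepts the same argument shape to reverse it.

```typescript
// Arguments for pause_print: the comma-separated batching is documented
// in the printer_id schema description; the IDs are placeholders.
const pausePrintArgs = {
  printer_id: "7,8,9", // pause the active print on three printers at once
};
```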
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide safety information (readOnlyHint=false, destructiveHint=false), indicating this is a non-destructive write operation. The description adds minimal behavioral context beyond this, stating it affects 'the current print' but not detailing effects like pausing mid-print or whether it's reversible. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words. It's front-loaded with the core action and resource, making it immediately scannable and efficient for an agent to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a simple mutation tool with one parameter and good annotations, the description is minimally adequate. However, without an output schema, it doesn't explain what happens after pausing (e.g., success confirmation, error states), and it lacks context about interactions with sibling tools like 'resume_print', leaving gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the single parameter 'printer_id' is fully documented in the schema. The description doesn't add any semantic details about the parameter beyond what's in the schema (e.g., why pausing requires printer_id, format implications), so it meets the baseline for high schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('pause') and target resource ('current print on a printer'), making the purpose immediately understandable. However, it doesn't explicitly differentiate from sibling tools like 'cancel_print' or 'resume_print', which would require mentioning it's a temporary suspension rather than termination.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives like 'cancel_print' or 'resume_print'. The description doesn't mention prerequisites (e.g., requires an active print) or contextual constraints, leaving the agent to infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
remove_from_queue (B) · Destructive
Remove an item from the print queue
| Name | Required | Description | Default |
|---|---|---|---|
| job | No | | |
| jobs | No | | |
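The schema types `job` as an integer and `jobs` as a string but documents neither. Reading `job` as a single queue item ID and `jobs` as a comma-separated batch, as sketched below, is an assumption.

```typescript
// Assumed argument shapes for remove_from_queue; both readings are guesses.
const removeSingleArgs = { job: 311 };       // hypothetical: one item ID
const removeBatchArgs = { jobs: "312,313" }; // hypothetical: comma-separated IDs
```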
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true, readOnlyHint=false, and non-idempotent behavior, and the description aligns with them by implying a mutation ('Remove'). The description adds value by specifying the target ('print queue'), but doesn't detail side effects such as whether removal is permanent or reversible, or whether it affects printer status. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, direct sentence with no wasted words, making it easy to parse. It's front-loaded with the core action, though brevity contributes to gaps in other dimensions like parameter guidance.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 0% schema coverage, no output schema, and multiple sibling tools, the description is insufficient. It lacks parameter details, usage context, and behavioral specifics beyond the basic action, leaving significant gaps for an agent to operate effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, with two parameters ('job' as integer and 'jobs' as string) undocumented in the schema. The description provides no information about these parameters, such as their purpose, how they interact, or which to use, failing to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Remove') and target resource ('an item from the print queue'), making the purpose immediately understandable. It doesn't explicitly differentiate from siblings like 'cancel_print' or 'empty_queue', but the specificity is adequate for understanding the core function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'cancel_print', 'empty_queue', or 'delete_queue_item'. It lacks context about prerequisites, such as whether the item must be in a specific state, or exclusions, leaving the agent to infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reorder_queue_group (B)
Move a queue group to a new position.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | | |
| group_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a mutable, non-idempotent, non-destructive operation (readOnlyHint: false, destructiveHint: false, idempotentHint: false). The description adds that it moves a queue group to a new position, implying reordering behavior, but doesn't detail effects on other groups, error conditions, or permissions required. With annotations covering safety, the description provides basic context without rich behavioral insights.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that directly states the tool's function without unnecessary words. It's front-loaded and efficiently conveys the core action, making it easy for an agent to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutation with 2 parameters), lack of output schema, and 0% schema description coverage, the description is insufficient. It doesn't explain what happens after reordering, potential side effects, or error handling, leaving gaps for the agent to operate effectively in context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema only defines 'to' and 'group_id' as integers with minimum values. The description mentions moving to a 'new position' and references 'group_id', but doesn't explain what these parameters represent (e.g., position index, group identifier) or their constraints beyond the schema, failing to compensate for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Move') and target resource ('a queue group'), specifying the purpose as repositioning it. It distinguishes from siblings like 'reorder_queue_item' by focusing on groups rather than individual items, but doesn't explicitly contrast with other queue manipulation tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'move_queue_item' or 'save_queue_group'. The description lacks context about prerequisites, such as whether the group must exist or be in a specific state, leaving the agent to infer usage scenarios.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reorder_queue_item (B)
Move a single queue item to a new 1-based position.
| Name | Required | Description | Default |
|---|---|---|---|
| to | Yes | | |
| from | No | | |
| queue_item_id | Yes | | |
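Only the 1-based `to` position is explained by the description. Treating `from` as the item's current position, as in the sketch below, is an assumption, as is the queue item ID value.

```typescript
// Arguments for reorder_queue_item: move an item to the front of the queue.
// to is 1-based per the description; reading from as the item's current
// position is an assumption.
const reorderItemArgs = {
  queue_item_id: 311, // hypothetical queue item ID
  to: 1,              // 1-based target position (front of the queue)
  from: 4,            // assumed: the item's current 1-based position
};
```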
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide safety hints (readOnlyHint=false, destructiveHint=false, idempotentHint=false), but the description adds useful context about the 1-based positioning system. However, it doesn't explain side effects, error conditions, or what happens to other items in the queue during reordering, which would be valuable for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action. Every word earns its place with no redundancy or unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 3 parameters (0% schema coverage), no output schema, and no behavioral details beyond basic positioning, the description is incomplete. It doesn't explain return values, error handling, or the full impact of reordering, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 3 parameters, the description only mentions 'to' (new position) but doesn't explain 'queue_item_id' or 'from' parameters. It fails to compensate for the schema's lack of descriptions, leaving key parameters semantically unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Move') and resource ('a single queue item') with precise scope ('to a new 1-based position'). It distinguishes from sibling tools like 'move_queue_item' by specifying reordering within a queue rather than moving between queues or other operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'move_queue_item' or 'reorder_queue_group'. It doesn't mention prerequisites, context, or exclusions, leaving the agent to infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resubmit_queue_item (A)
Resubmit a denied or revision-requested queue item back to pending approval. File-replacement is not supported via MCP.
| Name | Required | Description | Default |
|---|---|---|---|
| comment | No | | |
| queue_item_id | Yes | | |
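A sketch of the payload: the `comment` parameter is undocumented, so treating it as a free-text note to the approver is an assumption.

```typescript
// Arguments for resubmit_queue_item; the comment's role is undocumented,
// so the free-text note below is an assumed usage.
const resubmitArgs = {
  queue_item_id: 311,                    // hypothetical queue item ID
  comment: "Reoriented part; resliced.", // assumed: note for the approver
};
```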
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide basic hints (non-readOnly, non-destructive, etc.), but the description adds valuable context: it specifies the tool's limitation ('File-replacement is not supported via MCP'), which is crucial for understanding its behavior beyond the annotations. However, it doesn't detail side effects like rate limits or authentication needs, leaving some gaps.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences, front-loaded with the core purpose and followed by a critical limitation. Every word earns its place, with no redundancy or fluff, making it highly efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (mutation with 2 parameters) and no output schema, the description covers the essential purpose and a key limitation. However, it lacks details on return values, error conditions, or prerequisites, leaving some contextual gaps that could hinder full understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It implicitly references the queue item (and thus 'queue_item_id') but doesn't explain the 'comment' parameter's purpose or format. The description adds minimal parameter semantics beyond the schema and fails to close the coverage gap, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('resubmit'), target resource ('denied or revision-requested queue item'), and outcome ('back to pending approval'). It also distinguishes from sibling tools by specifying what it doesn't do ('File-replacement is not supported via MCP'), making it distinct from tools like 'update_file' or 'send_back_for_revision'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use this tool: for 'denied or revision-requested queue items.' It implies an alternative (file-replacement is not supported), but doesn't name specific sibling alternatives like 'send_back_for_revision' or 'update_queue_item' for different scenarios. The context is clear but lacks explicit exclusions or named alternatives.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resume_print (A)
Resume a paused print on a printer
| Name | Required | Description | Default |
|---|---|---|---|
| printer_id | Yes | The printer IDs (comma-separated for multiple) | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false, destructiveHint=false, and idempotentHint=false, indicating this is a non-destructive, non-idempotent write operation. The description adds that it resumes 'a paused print', providing useful context about the prerequisite state. However, it doesn't mention potential side effects, authentication requirements, or rate limits beyond what annotations cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, focused sentence with no wasted words. It's front-loaded with the core action and resource, making it immediately understandable without unnecessary elaboration.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a single-parameter tool with good annotations and no output schema, the description provides adequate context about what the tool does. It could be more complete by mentioning what happens if the printer isn't paused or what the expected outcome is, but it covers the essential purpose well given the tool's simplicity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the input schema fully documents the 'printer_id' parameter. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline expectation without providing extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Resume a paused print') and the target resource ('on a printer'), distinguishing it from sibling tools like 'pause_print' and 'cancel_print'. It uses precise language that leaves no ambiguity about the tool's function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying 'a paused print', suggesting it should be used when a print is already paused. However, it doesn't explicitly state when NOT to use it (e.g., on a running or completed print) or mention specific alternatives like 'pause_print' for the opposite action.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revive_queue_item (A)
Bring a completed (done) queue item back to the active queue.
| Name | Required | Description | Default |
|---|---|---|---|
| queue_item_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a mutation (readOnlyHint: false) that is non-destructive, non-idempotent, and closed-world. The description adds valuable context: it revives 'completed' items, implying a state change that might affect queue order or workflow. It doesn't detail side effects like notifications or permissions, but provides more than annotations alone.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and context. Every word contributes: 'bring back' defines the action, 'completed (done) queue item' specifies the target, and 'active queue' states the outcome, with zero redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with no output schema and minimal annotations, the description adequately covers the purpose and usage context. However, it lacks details on behavioral outcomes (e.g., what happens to queue order, error conditions, or response format), leaving gaps given the tool's potential complexity in a queue management system.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, with one required parameter 'queue_item_id'. The description doesn't mention parameters at all, so it adds no semantic value beyond the schema. However, with only one parameter and clear purpose, the baseline is 3 as the schema minimally suffices.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('bring back') and resource ('completed (done) queue item'), specifying it transitions items from 'completed' to 'active' state. This distinguishes it from siblings like 'resubmit_queue_item' (which likely handles different statuses) or 'move_queue_item' (which changes position without status change).
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly states when to use it: for 'completed (done) queue items' to return them to 'active queue'. It implies an alternative state transition but doesn't name specific sibling tools (e.g., 'resubmit_queue_item' might be for failed items) or mention when not to use it (e.g., for pending items).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
save_queue_group (B)
Create a new queue group or update an existing one.
| Name | Required | Description | Default |
|---|---|---|---|
| No parameters | | | |
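Because the schema exposes no parameters, the only well-formed call carries an empty `arguments` object, which leaves the create-versus-update choice in the description impossible to express:

```typescript
// The schema declares no parameters, so the only well-formed call sends
// empty arguments; nothing in the definition lets a caller name the group
// or select create versus update.
const saveQueueGroupArgs = {};
```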
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive, non-idempotent, and non-open-world tool, covering key behavioral traits. The description adds value by specifying it handles both creation and updates, which clarifies its dual functionality beyond annotations. However, it does not disclose additional details like authentication needs, rate limits, or error conditions, leaving some behavioral aspects uncovered.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence that directly states the tool's purpose without any redundant or extraneous information. It is front-loaded and efficiently conveys the essential action, making it highly concise and well-structured for quick understanding.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no parameters and annotations cover basic behavioral traits, the description is minimally adequate. However, it lacks an output schema and does not explain return values or potential side effects, such as what happens on creation versus update. For a mutation tool, more context on outcomes would improve completeness, but it meets the baseline for this complexity level.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0 parameters and 100% schema description coverage, the input schema fully documents the lack of parameters. The description does not add parameter-specific information, which is unnecessary here. A baseline of 4 is appropriate as the schema handles all parameter semantics effectively, and the description does not need to compensate for any gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new queue group or update an existing one') and identifies the resource ('queue group'), which is specific and unambiguous. However, it does not explicitly differentiate this tool from siblings such as 'update_queue_item' or 'reorder_queue_group', which could handle related operations, leaving some room for improvement in sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'create_folder' for organizational purposes or 'update_queue_item' for modifying queue items. It lacks context on prerequisites, exclusions, or specific scenarios, offering minimal usage direction beyond the basic action stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
send_back_for_revision (A) · Destructive
Revoke approval on an already-approved item and send it back to the submitter for revisions.
| Name | Required | Description | Default |
|---|---|---|---|
| comment | No | | |
| queue_item_id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations: it explains that the tool revokes approval and sends items back for revision, which clarifies the destructive nature hinted at by annotations (destructiveHint: true). It doesn't contradict annotations, and while annotations cover safety aspects, the description provides operational meaning to the destruction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and context without unnecessary words. Every part of the sentence contributes directly to understanding the tool's purpose and usage.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema, the description is reasonably complete: it explains the action, target, and outcome. However, it could benefit from mentioning potential side effects or confirmation of success, given the destructive nature. Annotations provide safety hints, but the description covers the operational intent well.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description doesn't add any parameter-specific information beyond what the schema provides. It doesn't explain what 'queue_item_id' or 'comment' represent, leaving parameters undocumented. Baseline is 3 as the schema handles parameter definitions, but the description doesn't compensate for the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Revoke approval' and 'send it back to the submitter for revisions') and identifies the resource ('an already-approved item'). It distinguishes from siblings like 'approve_queue_item' and 'deny_queue_item' by specifying it's for items that have already been approved.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context by specifying 'an already-approved item,' indicating when to use this tool. However, it doesn't explicitly state when not to use it or mention alternatives like 'deny_queue_item' for items not yet approved, which would be helpful for sibling differentiation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
send_gcode (A) · Destructive
Send G-code commands to a printer
| Name | Required | Description | Default |
|---|---|---|---|
| gcode | No | Array of G-code commands to send. Either gcode, macro, or snippet_id is required. | |
| macro | No | Predefined macro name to execute. Either gcode, macro, or snippet_id is required. | |
| printer_id | Yes | The printer IDs (comma-separated for multiple) | |
| snippet_id | No | ID of a gcode snippet to send. Either gcode, macro, or snippet_id is required. | |
| macro_context | No | Optional context data for macro execution. | |
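The documented either-or constraint is easy to illustrate. The sketch below shows one call with raw commands (standard Marlin-style G-code) and one with a macro, where the macro name and context shape are assumptions.

```typescript
// Two alternative argument shapes for send_gcode; exactly one of gcode,
// macro, or snippet_id is required per the schema.
const rawGcodeArgs = {
  printer_id: "7",             // placeholder printer ID
  gcode: ["G28", "M104 S200"], // home all axes, then set hotend to 200 C
};

const macroArgs = {
  printer_id: "7",
  macro: "LOAD_FILAMENT",              // hypothetical macro name
  macro_context: { temperature: 210 }, // assumed context shape
};
```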
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description doesn't add behavioral context beyond what annotations provide, but the annotations are comprehensive: destructiveHint=true indicates potential physical consequences, readOnlyHint=false confirms write capability, openWorldHint=true signals interaction with external systems (the physical printer), and idempotentHint=false warns about potential duplicate effects. The description doesn't contradict these annotations, so it meets the lower bar given that annotations are present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise at 6 words, front-loading the essential action and target. Every word earns its place with zero waste or redundancy, making it immediately scannable and understandable.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 5 parameters and no output schema, the description is minimal but annotations provide critical behavioral context. The combination is adequate but not rich; a more complete description would explain typical use cases, safety considerations, or expected outcomes given the destructive nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already documents all 5 parameters thoroughly. The description adds no parameter semantics beyond the schema's detailed descriptions of gcode, macro, printer_id, snippet_id, and macro_context parameters. This meets the baseline expectation when schema coverage is complete.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Send') and resource ('G-code commands to a printer'), making the purpose immediately understandable. However, it doesn't differentiate this tool from sibling tools like 'move_printer_axis' or 'home_printer' which also involve printer control, missing an opportunity for sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With many sibling tools for printer control (move_printer_axis, home_printer, pause_print, etc.), there's no indication of when direct G-code sending is appropriate versus using higher-level commands.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_custom_field_values (Grade B)
Set custom field values on one or more entities (queue items, files, printers, etc.). Use this to set a "deadline" or similar field across many queue items at once.
| Name | Required | Description | Default |
|---|---|---|---|
| values | Yes | ||
| category | Yes | ||
| entityIds | Yes | ||
| subCategory | No |
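A sketch of plausible arguments, assuming `category` accepts a value like "queue" and that `values` maps field names to the values to set; neither is documented in the schema, so both shapes are assumptions:

```json
{
  "category": "queue",
  "entityIds": [101, 102, 103],
  "values": { "deadline": "2025-07-01" }
}
```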
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare this is not read-only, not open-world, not idempotent, and not destructive. The description adds value by specifying this is for setting custom fields (not standard properties) and works on multiple entities at once. However, it doesn't disclose important behavioral aspects like whether this overwrites existing values, requires specific permissions, or has rate limits; these gaps are significant given the mutation nature and the lack of an output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that each serve a purpose: the first states the core functionality, and the second provides a concrete use case. It's front-loaded with the essential information. While efficient, it could benefit from slightly more structure separating purpose from example.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a mutation tool with 4 parameters (0% schema coverage), no output schema, and annotations that only cover basic safety hints, the description is insufficient. It doesn't explain what happens when setting values (overwrite vs append), what the response contains, error conditions, or how it differs from similar update tools in the sibling list. The example helps but doesn't compensate for the significant contextual gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 4 parameters (3 required), the description provides minimal parameter guidance. It mentions 'deadline' as an example field type and 'queue items' as example entities, which gives some context for the 'values' and 'entityIds' parameters. However, it doesn't explain the 'category' parameter's enum values or the optional 'subCategory' parameter, leaving significant semantic gaps for the agent to interpret.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Set custom field values') and target resources ('entities (queue items, files, printers, etc.)'), providing a specific verb+resource combination. It gives a concrete example ('deadline' field) that helps illustrate the purpose. However, it doesn't explicitly differentiate this tool from potential siblings like 'update_queue_item' or 'update_file' that might also modify entity properties.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context with the phrase 'across many queue items at once,' suggesting batch operations as a use case. However, it doesn't explicitly state when to use this tool versus alternatives like 'update_queue_item' or 'update_file' that appear in the sibling list. No explicit exclusions or prerequisites are mentioned, leaving usage guidelines at an implied level.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_file_printers (Grade B)
Set the printers/models/groups a file is assigned to.
| Name | Required | Description | Default |
|---|---|---|---|
| items | Yes | | |
| models | No | | |
| printers | No | | |
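Since none of the three parameters carry schema descriptions, the shape below is guesswork: it assumes `items` holds file IDs and `printers` holds printer IDs, both as arrays of placeholder values:

```json
{
  "items": [42],
  "printers": [1234, 5678]
}
```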
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a mutable (readOnlyHint: false), non-destructive, non-idempotent operation, but the description adds no behavioral context beyond this. It doesn't explain what 'set' entails (e.g., overwriting existing assignments, requiring permissions, or affecting print queues), so it relies heavily on annotations without enhancing understanding.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words, front-loading the core action. It's appropriately sized for the tool's apparent complexity, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given no output schema, low schema coverage, and annotations that only cover basic hints, the description is incomplete. It doesn't address what the tool returns, how errors are handled, or the implications of setting assignments in a printing context, leaving significant gaps for an agent to operate effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 3 parameters, the description fails to compensate by explaining what 'items', 'models', or 'printers' represent. It mentions 'printers/models/groups' but doesn't clarify which parameter corresponds to which, leaving semantics ambiguous and unhelpful beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Set') and the target ('printers/models/groups a file is assigned to'), making the purpose understandable. It doesn't differentiate from siblings like 'assign_filament' or 'match_file_to_printers', but the verb+resource combination is specific enough for basic understanding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'assign_filament' or 'match_file_to_printers'. The description lacks context about prerequisites, such as whether the file must exist or be in a specific state, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_printer_fans (Grade B)
Control the printer cooling fans
| Name | Required | Description | Default |
|---|---|---|---|
| speed | Yes | Fan speed from 0 (off) to 255 (full speed). Use 0 to turn fans off. | |
| printer_id | Yes | The printer IDs (comma-separated for multiple) | |
| snippet_id | No | ID of a gcode snippet to send. Either gcode, macro, or snippet_id is required. |
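A minimal hypothetical call, setting roughly half speed (128 of 255) on a placeholder printer:

```json
{
  "printer_id": "1234",
  "speed": 128
}
```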
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive, non-idempotent, non-open-world tool, covering basic safety. The description adds minimal behavioral context beyond this, such as implying it's for cooling control, but doesn't detail effects like temperature changes or error handling. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words, making it easy to parse and front-loaded with essential information. It efficiently conveys the core purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, 100% schema coverage, and annotations, the description is minimally adequate. However, without an output schema, it doesn't explain return values or error conditions, and it lacks context on when to use it, leaving gaps in overall completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, so parameters are well documented in the schema. The description adds nothing beyond the schema, such as parameter interactions or use cases, but this is acceptable given the high coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Control') and resource ('printer cooling fans'), making the purpose evident. However, it doesn't differentiate from sibling tools like 'set_printer_motors' or 'send_gcode', which also involve printer control, so it stops short of full sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, exclusions, or related tools, leaving the agent to infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
set_printer_motors (Grade B)
Enable or disable the printer stepper motors
| Name | Required | Description | Default |
|---|---|---|---|
| enabled | Yes | Motor state: "on" to enable motors (lock axes), "off" to disable (allow manual movement) | |
| printer_id | Yes | The printer IDs (comma-separated for multiple) | |
| snippet_id | No | ID of a gcode snippet to send. Either gcode, macro, or snippet_id is required. |
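For example, disabling the motors so the axes can be moved by hand might look like this (the printer ID is a placeholder):

```json
{
  "printer_id": "1234",
  "enabled": "off"
}
```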
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive, non-idempotent, closed-world operation. The description adds that it controls 'stepper motors' and implies physical effects (enabling locks axes, disabling allows manual movement), providing some behavioral context beyond annotations. However, it doesn't cover error conditions, permissions, or side effects like power consumption changes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that directly states the tool's purpose with no wasted words. It's appropriately sized and front-loaded, making it easy to parse quickly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (controls physical hardware), lack of output schema, and rich annotations, the description is minimally adequate. It covers the basic action but lacks details on outcomes, error handling, or integration with other printer operations, leaving gaps in completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 100%, with clear parameter documentation in the schema (e.g., 'enabled' enum values explained). The description doesn't add any parameter semantics beyond what the schema provides, so it meets the baseline of 3 for high schema coverage without extra value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Enable or disable') and the resource ('printer stepper motors'), providing a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'home_printer' or 'move_printer_axis' that also control printer hardware, missing explicit sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., printer must be idle), exclusions (e.g., not during printing), or related tools like 'send_gcode' for motor control, leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
unarchive_print_job (Grade B)
Restore an archived print job.
| Name | Required | Description | Default |
|---|---|---|---|
| jobs | Yes | | |
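With 0% schema coverage, the payload shape is an assumption; the sketch below presumes `jobs` takes an array of archived job IDs (both values are placeholders):

```json
{
  "jobs": [8801, 8802]
}
```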
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations cover key behavioral traits (readOnlyHint=false, destructiveHint=false, etc.), so the description's burden is lower. It adds context by specifying 'archived' as a precondition, which isn't in the annotations. However, it doesn't disclose rate limits, authentication needs, or what 'restore' entails operationally (e.g., does it return the job to a queue?).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with zero wasted words. It's front-loaded with the core action and resource, making it easy to parse quickly. Every word earns its place without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (a mutation with no output schema) and rich annotations, the description is minimally complete. It states what the tool does but lacks details on outcomes, error conditions, or integration with sibling tools. Annotations cover safety, but more operational context would help.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description doesn't add any parameter details beyond implying 'jobs' refers to archived print jobs. With only one parameter and no schema descriptions, the baseline is 3: adequate but minimal, as the agent must rely on the schema's structure alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Restore') and the resource ('an archived print job'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'archive_print_job' or 'revive_queue_item', but the verb 'restore' implies reversing an archive operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'revive_queue_item' or 'resubmit_queue_item'. It doesn't mention prerequisites (e.g., that the job must be archived first) or exclusions, leaving the agent to infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
unassign_filament (Grade B)
Remove a filament spool from its printer assignment.
| Name | Required | Description | Default |
|---|---|---|---|
| source | No | ||
| filament_id | Yes | The filament ID | |
| location_id | No |
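In the simplest case only the required parameter is needed; the ID below is a placeholder, and the undocumented `source` and `location_id` are omitted:

```json
{
  "filament_id": 77
}
```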
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-readOnly, non-destructive, non-idempotent operation. The description adds that it removes an assignment, which aligns with annotations but doesn't provide additional behavioral context like permission requirements, side effects, or what happens if the filament isn't currently assigned. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, clear sentence with no wasted words. It's front-loaded with the core action and resource, making it efficient and easy to parse.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, 33% schema coverage, no output schema, and annotations covering basic hints, the description is adequate but incomplete. It explains the core purpose but lacks details on parameter usage, behavioral nuances, or output expectations, leaving gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 33% (only 'filament_id' has a description). The description mentions 'filament spool' which hints at 'filament_id', but doesn't explain 'source' or 'location_id' parameters. It adds minimal value beyond the schema, resulting in a baseline score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Remove') and resource ('filament spool from its printer assignment'), making the purpose immediately understandable. It doesn't explicitly differentiate from sibling tools like 'assign_filament', but the verb 'Remove' versus 'Assign' provides implicit distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, prerequisites, or constraints. It doesn't mention the sibling 'assign_filament' tool or any other related operations, leaving the agent to infer usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_file (Grade C)
Update file metadata such as name and GCODE analysis.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| item_id | Yes | ||
| analysis | No | ||
| printers | No | ||
| printer_models | No | ||
| remove_thumbnail | No |
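A hypothetical partial update, assuming `item_id` is the file's numeric ID and that unspecified fields are left untouched (the description does not confirm either):

```json
{
  "item_id": 42,
  "name": "benchy_v2.gcode",
  "remove_thumbnail": true
}
```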
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive, non-idempotent, closed-world operation, which the description doesn't contradict. However, the description adds minimal behavioral context beyond annotations: it hints at metadata updates but doesn't detail effects like permission requirements, rate limits, or what happens to unspecified fields. With annotations covering the basic traits, it earns only a baseline score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose without unnecessary details. It could be slightly more structured by listing all updatable fields, but it avoids redundancy and wastes no words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 0% schema coverage, no output schema, and annotations that don't fully explain behavior, the description is incomplete. It should cover more parameters, usage context, and expected outcomes to be adequate for this mutation tool's complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but only mentions 'name and GCODE analysis' (likely mapping to 'name' and 'analysis' parameters), ignoring the other 4 parameters ('item_id', 'printers', 'printer_models', 'remove_thumbnail'). This partial coverage fails to adequately explain parameter meanings, leaving most undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Update') and resource ('file metadata'), and specifies what can be updated ('name and GCODE analysis'). However, it doesn't explicitly differentiate from sibling tools like 'set_file_printers' or 'move_file', which might also modify file-related attributes, so it misses full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. For example, it doesn't mention when to choose 'update_file' over 'set_file_printers' or 'move_file', nor does it specify prerequisites or exclusions, leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_queue_comment (Grade B)
Edit an approval comment you authored.
| Name | Required | Description | Default |
|---|---|---|---|
| comment | Yes | ||
| item_id | Yes |
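Both parameters are required, so a call is straightforward; the item ID is a placeholder and presumably refers to the queue item whose comment is being edited:

```json
{
  "item_id": 314,
  "comment": "Approved after reslicing at 0.2 mm layer height."
}
```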
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a mutation tool (readOnlyHint: false) that is non-destructive and non-idempotent. The description adds context by specifying it edits 'an approval comment you authored,' implying ownership/authorization requirements, which isn't covered by annotations. However, it lacks details on rate limits, error conditions, or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose ('Edit an approval comment you authored') with zero wasted words. It's appropriately sized for a tool with two parameters and clear annotations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's mutation nature (annotations show readOnlyHint: false) and lack of output schema, the description is minimally adequate but incomplete. It covers the basic purpose and ownership context but misses details on error handling, response format, and full parameter semantics, which are important for a write operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'comment' and implies 'item_id' through 'an approval comment,' but doesn't explain what 'item_id' refers to (e.g., queue item ID) or provide format/constraints beyond the schema. This leaves significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Edit') and target resource ('an approval comment you authored'), providing a specific verb+resource combination. It distinguishes itself from siblings like 'add_queue_comment' and 'delete_queue_comment' by focusing on editing existing comments rather than creating or deleting them.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context through 'you authored,' suggesting it's for editing one's own comments, but doesn't explicitly state when to use this tool versus alternatives like 'add_queue_comment' or 'delete_queue_comment.' No explicit exclusions or prerequisites are provided, leaving some ambiguity.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
update_queue_item (Grade B)
Update a queue item (amount, note, custom print time, material usage, printer assignments).
| Name | Required | Description | Default |
|---|---|---|---|
| note | No | ||
| time | No | ||
| amount | No | ||
| printed | No | ||
| for_groups | No | ||
| for_models | No | ||
| for_printers | No | ||
| queue_item_id | Yes | ||
| material_usage | No |
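A hypothetical partial update touching a few of the nine fields; whether omitted fields are preserved is undocumented, and all IDs and values here are placeholders:

```json
{
  "queue_item_id": 512,
  "amount": 3,
  "note": "Rush order",
  "for_printers": [1234]
}
```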
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a non-read-only, non-destructive, non-idempotent, non-open-world operation. The description adds value by specifying what fields can be updated, which provides context beyond the annotations. However, it doesn't mention important behavioral aspects like permission requirements, error conditions, or what happens to unspecified fields during update. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action and lists key fields without unnecessary words. Every element serves a purpose, making it appropriately sized for the tool's complexity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 9 parameters with 0% schema coverage, no output schema, and annotations that only cover basic hints, the description is insufficient. It doesn't explain the update semantics (e.g., partial vs. full updates), required permissions, error handling, or return values. For a mutation tool with many undocumented parameters, this leaves significant gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 9 parameters, the description carries full burden for parameter documentation. It lists 5 updateable fields (amount, note, custom print time, material usage, printer assignments), which partially maps to parameters like 'amount', 'note', 'time', 'material_usage', and printer-related fields. However, it misses explaining parameters like 'printed', 'for_groups', 'for_models', and doesn't clarify the relationship between listed fields and actual parameter names or their semantics.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('a queue item'), and lists specific fields that can be modified (amount, note, custom print time, material usage, printer assignments). This provides a specific verb+resource combination, though it doesn't explicitly differentiate from sibling tools like 'move_queue_item' or 'reorder_queue_item' which might also modify queue items in different ways.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a queue_item_id), when it's appropriate versus other queue modification tools, or any constraints on usage. This leaves the agent without contextual decision-making help.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
```json
{
  "$schema": "https://glama.ai/mcp/schemas/connector.json",
  "maintainers": [{ "email": "your-email@example.com" }]
}
```

The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Claiming this connector lets you:
- Control your server's listing on Glama, including description and metadata
- Access analytics and receive server usage reports
- Get monitoring and health status updates for your server
- Feature your server to boost visibility and reach more users
For users:
- Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
- Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
- Centralized credential management – store and rotate API keys and OAuth tokens in one place
- Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
- Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
- Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
- Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to connect to the server. This can happen for several reasons:
- The server is experiencing an outage
- The URL of the server is wrong
- Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.