Contabo (VPS) MCP Server
Server Details
Exposes the Contabo API (v1.0.0) as MCP tools for cloud provisioning and management. Powered by the HAPI MCP server.
- Status: Healthy
- Last Tested:
- Transport: Streamable HTTP
- URL:
- Repository: la-rebelion/hapimcp
- GitHub Stars: 7
Glama MCP Gateway
Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.
Full call logging
Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.
Tool access control
Enable or disable individual tools per connector, so you decide what your agents can and cannot do.
Managed credentials
Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.
Usage analytics
See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.
Available Tools
124 tools

assignInstancePrivateNetwork: Add instance to a Private Network (D, Destructive)
Add instance to a Private Network - Add a specific instance to a Private Network
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| privateNetworkId | Yes |
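Since the schema documents nothing, the call shape has to be inferred from the parameter table alone. The following is a hypothetical sketch of the arguments an MCP client might pass; all IDs are invented, and the x-request-id format follows the UUID pattern the review mentions.

```python
import uuid

# Hypothetical arguments for assignInstancePrivateNetwork.
# All values are illustrative; the schema documents no formats.
arguments = {
    "instanceId": 12345,                # target instance (type/format assumed)
    "privateNetworkId": 67890,          # Private Network to join (type/format assumed)
    "x-request-id": str(uuid.uuid4()),  # required; UUID pattern per the review
    # "x-trace-id" is optional and omitted here
}

required = {"instanceId", "privateNetworkId", "x-request-id"}
assert required <= set(arguments)
```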
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, but the description adds no behavioral context beyond the basic action. It does not explain what 'destructive' entails (e.g., network disruption, instance downtime), permissions required, or side effects, leaving significant gaps despite annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single repetitive sentence ('Add instance to a Private Network - Add a specific instance to a Private Network'), which is under-specified rather than concise. It wastes space on redundancy without adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 4 parameters (0% schema coverage) and no output schema, the description is severely incomplete. It lacks details on behavior, parameters, outcomes, and error conditions, making it inadequate for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, and the description provides no information about parameters. It does not explain what 'instanceId' or 'privateNetworkId' represent, their formats, or the purpose of 'x-request-id' and 'x-trace-id', failing to compensate for the lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description restates the title and name with minimal variation ('Add instance to a Private Network - Add a specific instance to a Private Network'), making it tautological. It does not specify what 'add' entails operationally or distinguish it from siblings like 'unassignInstancePrivateNetwork'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives, prerequisites, or exclusions. It does not reference sibling tools like 'unassignInstancePrivateNetwork' or 'createPrivateNetwork' for context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
assignIp: Assign a VIP to an VPS/VDS/Bare Metal (C, Destructive)
Assign a VIP to an VPS/VDS/Bare Metal - Assign a VIP to a VPS/VDS/Bare Metal using the machine id.
| Name | Required | Description | Default |
|---|---|---|---|
| ip | Yes | ||
| resourceId | Yes | ||
| x-trace-id | No | ||
| resourceType | Yes | ||
| x-request-id | Yes |
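As a concrete illustration of the missing parameter semantics, here is a hypothetical argument set. The 'instances' value for resourceType is taken from the enum values the review mentions; the IP address and resource id are invented.

```python
import uuid

# Hypothetical arguments for assignIp; values are illustrative.
arguments = {
    "ip": "203.0.113.10",               # the VIP to assign (documentation-range address)
    "resourceId": "12345",              # the 'machine id' the description alludes to
    "resourceType": "instances",        # enum per the review: 'instances' vs 'bare-metal'
    "x-request-id": str(uuid.uuid4()),  # required; format undocumented in the schema
}

assert arguments["resourceType"] in {"instances", "bare-metal"}
```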
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a destructive, non-read-only operation, which aligns with the 'assign' action implying mutation. However, the description adds minimal behavioral context beyond this—it doesn't mention side effects, permission requirements, idempotency, or what happens if the VIP is already assigned. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, repetitive sentence that restates the title with minimal added value. It's front-loaded but under-specified, wasting space on redundancy ('Assign a VIP to an VPS/VDS/Bare Metal - Assign a VIP to a VPS/VDS/Bare Metal') instead of providing useful details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 5 parameters, 0% schema coverage, and no output schema, the description is inadequate. It lacks critical context: what a VIP is, how assignment affects existing configurations, error conditions, or return values. Sibling tools like 'retrieveVip' suggest VIP management complexity that isn't addressed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 5 parameters (4 required), the description provides no semantic information about parameters like 'ip', 'resourceId', 'resourceType', or the UUID-patterned 'x-request-id'. It mentions 'machine id' vaguely but doesn't clarify which parameter this corresponds to or how to use enums like 'instances' vs 'bare-metal'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('Assign a VIP') and target resources ('VPS/VDS/Bare Metal'), but it's repetitive and vague about what 'VIP' means (Virtual IP?). It doesn't clearly differentiate from sibling tools like 'unassignIp' or explain the specific relationship between VIP assignment and other networking operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'unassignIp' or networking-related siblings. The description mentions 'using the machine id' but doesn't explain prerequisites, dependencies, or typical scenarios for VIP assignment versus regular IP configuration.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
bulkDeleteDnsZoneRecords: Bulk delete DNS zone records (C, Destructive)
Bulk delete DNS zone records - Delete multiple zone records from a DNS Zone
| Name | Required | Description | Default |
|---|---|---|---|
| zoneName | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| bulkDeleteDnsZoneRecordsBody | Yes |
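A sketch of what a call might look like, assuming the 'recordIds' field the review attributes to the body; the field name and id type are assumptions, since the schema documents neither.

```python
import uuid

# Hypothetical arguments for bulkDeleteDnsZoneRecords.
# The body's 'recordIds' field is assumed from the review text.
arguments = {
    "zoneName": "example.com",          # the DNS zone to operate on
    "x-request-id": str(uuid.uuid4()),
    "bulkDeleteDnsZoneRecordsBody": {
        "recordIds": [111, 222, 333],   # records to delete; id type assumed
    },
}
```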
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, covering safety and mutation aspects. The description adds minimal context by specifying 'bulk' deletion, implying multiple records, but lacks details on permissions, rate limits, or irreversible effects. It doesn't contradict annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but repetitive ('Bulk delete DNS zone records - Delete multiple zone records from a DNS Zone'), wasting words on redundancy. It's front-loaded but could be more efficient by eliminating tautological phrasing.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 4 parameters (3 required), 0% schema coverage, no output schema, and nested objects, the description is inadequate. It doesn't address parameter meanings, behavioral nuances beyond annotations, or expected outcomes, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but fails to explain parameters. It mentions 'zone records' but doesn't clarify 'zoneName', 'recordIds', or header parameters like 'x-request-id', leaving semantics unclear beyond what the schema minimally provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('bulk delete') and resource ('DNS zone records'), making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'deleteDnsZoneRecord' (singular delete) or 'deleteDnsZone' (entire zone deletion), a distinction the top score would require.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'deleteDnsZoneRecord' for single deletions or 'deleteDnsZone' for zone removal. The description merely restates the purpose without contextual usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cancelDomain: Cancel a specific domain (C, Destructive)
Cancel a specific domain - Cancel a specific domain
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| cancelDomainBody | Yes |
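Because the review finds the structure of 'cancelDomainBody' entirely undocumented, even a sketch can only leave it as a placeholder:

```python
import uuid

# Hypothetical arguments for cancelDomain. The required
# 'cancelDomainBody' object has no documented fields, so it is
# left empty here as a placeholder.
arguments = {
    "domain": "example.com",
    "x-request-id": str(uuid.uuid4()),
    "cancelDomainBody": {},
}
```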
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description does not contradict. The description adds no behavioral details beyond these annotations, such as rate limits, authentication needs, or effects on related resources. However, since annotations cover the core safety profile, the description meets the lower bar without adding value, scoring above baseline but not excelling.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive ('Cancel a specific domain - Cancel a specific domain'), wasting space without adding value. It lacks structure and front-loading of useful information, making it inefficient despite its brevity. Every sentence does not earn its place, as it merely echoes the title.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive operation with 4 parameters, nested objects, no output schema, and 0% schema coverage), the description is inadequate. It fails to explain parameter meanings, usage scenarios, or expected outcomes, leaving significant gaps for the agent to understand and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters like 'domain', 'x-request-id', and 'cancelDomainBody' are undocumented in the schema. The description provides no information about these parameters, failing to compensate for the schema gap. It does not explain what 'domain' refers to, the purpose of 'x-request-id', or the structure of 'cancelDomainBody'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological, merely restating the tool name and title ('Cancel a specific domain - Cancel a specific domain'). It specifies the verb ('Cancel') and resource ('domain'), but fails to differentiate from sibling tools like 'cancelInstance' or 'revokeCancelDomain', offering no additional clarity beyond what's already obvious from the name and title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, appropriate contexts, or exclusions, such as how it differs from 'revokeCancelDomain' or when cancellation is irreversible. This leaves the agent without necessary usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
cancelInstance: Cancel specific instance by id (C, Destructive)
Cancel specific instance by id - Your are free to cancel a previously created instance at any time.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| cancelInstanceBody | Yes |
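A hypothetical argument sketch follows. The body's 'cancelDate' field is mentioned in the review; its date format is an assumption (ISO 8601 here), since the schema documents nothing.

```python
import uuid

# Hypothetical arguments for cancelInstance; values invented.
arguments = {
    "instanceId": 12345,                # the instance to cancel (type assumed)
    "x-request-id": str(uuid.uuid4()),
    "cancelInstanceBody": {
        "cancelDate": "2026-01-31",     # field per the review; format assumed
    },
}
```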
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, covering safety. The description adds that cancellation applies to 'previously created instance' and 'at any time', offering some context. However, it doesn't detail irreversible effects, billing implications, or response behavior, leaving gaps despite annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core action, avoiding redundancy. However, the second sentence ('Your are free...') is slightly awkward and could be more precise, but overall it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 4 parameters (0% schema coverage), no output schema, and complex siblings, the description is inadequate. It lacks parameter explanations, behavioral details (e.g., cancellation effects), and usage context, failing to compensate for missing structured data.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but fails to explain parameters. It mentions 'by id' (hinting at instanceId) but ignores x-request-id, x-trace-id, and cancelInstanceBody (including cancelDate). No semantic details are provided beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('cancel') and target ('specific instance by id'), making the purpose understandable. However, it doesn't distinguish cancellation from other destructive operations; no 'deleteInstance' sibling exists, but similarly destructive tools such as 'deleteUser' do, and the difference is never drawn explicitly.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance ('at any time'), but lacks explicit when-to-use context, prerequisites (e.g., instance must be running), or alternatives (e.g., vs. shutdown or delete). No comparison to siblings like 'deleteInstance' or 'shutdown' is made.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
CancelObjectStorage: Cancels the specified object storage at the next possible date (C, Destructive)
Cancels the specified object storage at the next possible date - Cancels the specified object storage at the next possible date. Please be aware of your contract periods.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| objectStorageId | Yes | ||
| CancelObjectStorageBody | Yes |
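A hypothetical sketch of the arguments. The body's 'cancelDate' field is taken from the review; the id format and date format are assumptions.

```python
import uuid

# Hypothetical arguments for CancelObjectStorage; values invented.
arguments = {
    "objectStorageId": "abc-123",       # id format undocumented; value invented
    "x-request-id": str(uuid.uuid4()),
    "CancelObjectStorageBody": {
        "cancelDate": "2026-01-31",     # field per the review; format assumed
    },
}
```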
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, covering the safety profile. The description adds context about cancellation timing ('at the next possible date') and contract periods, which are useful behavioral details not in annotations. However, it doesn't disclose potential side effects like data loss or billing changes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is inefficiently repetitive ('Cancels the specified object storage at the next possible date' is stated twice) and lacks front-loading of key information. It wastes space without adding value, though it's not excessively long.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 4 parameters, 0% schema coverage, and no output schema, the description is inadequate. It misses critical details like parameter purposes, cancellation effects, error conditions, and return values. The context signals indicate high complexity, but the description doesn't address this.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for 4 parameters. It only mentions 'object storage' vaguely, without explaining parameters like objectStorageId, x-request-id, or CancelObjectStorageBody.cancelDate. The description fails to add meaningful semantics beyond what the schema names imply.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool cancels object storage at the next possible date, which provides a basic verb+resource. However, it's vague about what 'cancels' entails (termination vs. suspension) and doesn't distinguish from sibling tools like cancelDomain or cancelInstance beyond the resource type. The repetition in the description adds no clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions 'Please be aware of your contract periods,' which implies a constraint but doesn't explicitly state when to use this tool versus alternatives like deleteObjectStorage (not in siblings) or updateObjectStorage. No guidance on prerequisites, timing, or comparisons with similar cancellation tools is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
confirmDomainTransferOut: Confirm transfer out for a domain (C, Destructive)
Confirm transfer out for a domain - Confirm transfer out for a domain
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
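Unlike its siblings, this tool takes no request body. A minimal hypothetical argument set, with invented values:

```python
import uuid

# Hypothetical arguments for confirmDomainTransferOut; this tool
# takes only the domain and the two tracing headers.
arguments = {
    "domain": "example.com",
    "x-request-id": str(uuid.uuid4()),  # required; UUID pattern per the review
    "x-trace-id": "trace-001",          # optional; format invented
}
```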
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which already signal this is a mutating, potentially irreversible operation. The description adds no behavioral details beyond this, such as what 'confirm' entails, whether it's reversible, or any side effects. However, it doesn't contradict the annotations, so it meets the lower bar with annotations present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is overly concise to the point of being unhelpful, consisting of a repetitive phrase that adds no value. While it's brief, it's not effectively structured or front-loaded with useful information, making it inefficient rather than concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a destructive domain operation with three parameters, no output schema, and 0% schema coverage), the description is severely inadequate. It lacks essential details about the tool's purpose, parameters, behavior, and output, failing to provide a complete picture for safe and correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the three parameters (domain, x-request-id, x-trace-id) are documented in the schema. The description provides no information about these parameters, failing to compensate for the schema's lack of documentation, which is critical for understanding required inputs like the UUID-patterned x-request-id.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is a tautology that merely restates the tool name and title ('Confirm transfer out for a domain - Confirm transfer out for a domain'), providing no additional clarity about what the tool actually does. It doesn't specify what 'confirm transfer out' entails operationally or how it differs from sibling tools like 'cancelDomain' or 'revokeDomainTransferOut'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives. There are no mentions of prerequisites, timing considerations, or distinctions from related tools like 'cancelDomain' or 'revokeDomainTransferOut', leaving the agent with no usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
createAssignment: Create a new assignment for the tag (C, Destructive)
Create a new assignment for the tag - Create a new tag assignment. This marks the specified resource with the specified tag for organizing purposes or to restrict access to that resource.
| Name | Required | Description | Default |
|---|---|---|---|
| tagId | Yes | ||
| resourceId | Yes | ||
| x-trace-id | No | ||
| resourceType | Yes | ||
| x-request-id | Yes |
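A hypothetical argument sketch. The valid resourceType enum values are undocumented, so 'instance' below is a guess, as are the ids.

```python
import uuid

# Hypothetical arguments for createAssignment; values invented.
arguments = {
    "tagId": 42,                        # the tag to assign (type assumed)
    "resourceId": "12345",              # the resource to mark with the tag
    "resourceType": "instance",         # assumed; valid enum values undocumented
    "x-request-id": str(uuid.uuid4()),
}
```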
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description aligns with by implying a write operation ('create') and potential access restriction effects. However, it doesn't add significant behavioral context beyond annotations, such as rate limits, idempotency, or error conditions. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with some redundancy (repeats 'create a new assignment'), but it's front-loaded with the core purpose. It could be more concise by merging ideas, yet it avoids excessive verbosity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 5 parameters (0% schema coverage) and no output schema, the description is incomplete. It lacks details on parameter meanings, expected outcomes, error handling, and how it fits with sibling tools like 'deleteAssignment', making it insufficient for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for 5 parameters. It only vaguely references 'specified resource' and 'specified tag', without explaining parameter roles (e.g., tagId, resourceType, resourceId), formats, or constraints like x-request-id's UUID pattern. This leaves key semantics undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('create') and resource ('tag assignment'), specifying it marks a resource with a tag for organizing or access restriction. It distinguishes from sibling tools like 'createTag' (creates tag itself) and 'deleteAssignment' (removes assignment), but doesn't explicitly contrast with 'retrieveAssignment' or 'retrieveAssignmentList' for read operations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives. It mentions purposes (organizing or restricting access) but doesn't specify prerequisites (e.g., tag and resource must exist), exclusions, or direct comparisons to sibling tools like 'updateTag' for modifying tags or 'retrieveAssignment' for checking assignments.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
createCustomImage: Provide a custom image (B, Destructive)
Provide a custom image - In order to provide a custom image please specify an URL from where the image can be directly downloaded. A custom image must be in either .iso or .qcow2 format. Other formats will be rejected. Please note that downloading can take a while depending on network speed resp. bandwidth and size of image. You can check the status by retrieving information about the image via a GET request. Download will be rejected if you have exceeded your limits.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createCustomImageBody | Yes |
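The description at least constrains the body's 'url' field (.iso or .qcow2 only); 'name' and 'osType' are fields the review mentions, and the values below are invented.

```python
import uuid

# Hypothetical arguments for createCustomImage. The .iso/.qcow2
# URL requirement comes from the tool description; 'name' and
# 'osType' are fields the review mentions, with invented values.
arguments = {
    "x-request-id": str(uuid.uuid4()),
    "createCustomImageBody": {
        "url": "https://images.example.com/custom.qcow2",  # must be .iso or .qcow2
        "name": "my-custom-image",
        "osType": "Linux",              # valid values undocumented
    },
}

assert arguments["createCustomImageBody"]["url"].endswith((".iso", ".qcow2"))
```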
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by describing a creation/download process. The description adds valuable behavioral context beyond annotations: it notes format restrictions, potential download delays, status checking via GET, and download limits. This compensates well for the lack of annotations on these specific behaviors.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise but could be better structured. It front-loads the core action but includes somewhat verbose notes (e.g., 'depending on network speed resp. bandwidth'). Sentences are mostly relevant, but the flow could be improved for clarity, such as separating requirements from behavioral notes more distinctly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive creation with nested inputs) and lack of output schema, the description is partially complete. It covers key behaviors like format constraints and status checking, but omits details on error handling, response format, or interactions with sibling tools (e.g., how this relates to 'retrieveImage'). More context on outcomes and integration would enhance completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It only mentions the 'url' parameter explicitly and vaguely references 'limits' without detailing parameters like 'x-request-id' or 'createCustomImageBody' fields (e.g., 'name', 'osType'). The description fails to add meaningful semantics for most parameters, leaving significant gaps in understanding inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Provide a custom image' with the requirement to specify a downloadable URL. It distinguishes itself from sibling tools like 'retrieveImage' or 'deleteImage' by focusing on creation/upload. However, it doesn't explicitly contrast with 'createInstance' or 'createSnapshot' which might involve images, leaving some sibling differentiation incomplete.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by specifying format requirements ('.iso' or '.qcow2') and mentioning that status can be checked via a GET request (likely referring to 'retrieveImage'). However, it doesn't explicitly state when to use this tool versus alternatives like 'createInstance' (which might use pre-existing images) or warn against misuse. Guidelines are present but not comprehensive.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
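Since the `createCustomImageBody` schema is undocumented, an agent can at least enforce the stated format restriction client-side before calling the tool. A minimal sketch; the field names `url`, `name`, and `osType` follow the evaluation notes above and are assumptions about the actual body schema.

```python
from urllib.parse import urlparse

# Only formats the description says Contabo accepts
ALLOWED_EXTENSIONS = (".iso", ".qcow2")

def build_custom_image_body(url: str, name: str, os_type: str) -> dict:
    """Validate the image URL extension, then assemble the request body."""
    path = urlparse(url).path.lower()
    if not path.endswith(ALLOWED_EXTENSIONS):
        raise ValueError(f"image must be .iso or .qcow2, got: {path}")
    return {"url": url, "name": name, "osType": os_type}
```

Because the download runs asynchronously, the agent would then poll the image via a GET request (e.g. `retrieveImage`) to confirm completion.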
createDnsZone · Create DNS zone · Grade: C · Destructive
Create DNS zone - Creates a new DNS zone for a customer
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createDnsZoneBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description doesn't contradict (it describes a creation operation, not read-only). The description adds that this creates 'for a customer,' providing context about ownership that annotations don't cover. However, it lacks details on permissions needed, rate limits, or what 'destructive' means in this context (e.g., if it overwrites existing zones). With annotations covering safety basics, the description adds some value but could be more informative.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief (one sentence) and front-loaded with the core action, which is efficient. However, it's arguably too concise—it repeats the title ('Create DNS zone') and adds minimal value in the second clause. While not verbose, it under-specifies key details, making the conciseness feel more like omission than precision.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive creation tool with 3 parameters (0% schema coverage), no output schema, and complex siblings, the description is inadequate. It doesn't explain parameters, return values, error conditions, or how it fits into the broader DNS management context. Annotations provide basic safety hints, but the description fails to address the tool's operational context, leaving significant gaps for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden for explaining parameters. It mentions none of the 3 parameters (x-trace-id, x-request-id, createDnsZoneBody with zoneName). The description doesn't explain what 'zoneName' should be, what the request IDs are for, or any parameter constraints. This leaves critical input semantics undocumented, failing to compensate for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('create') and resource ('DNS zone'), which is clear but minimal. It adds 'for a customer' which provides some context, but doesn't differentiate from sibling tools like 'createDnsZoneRecord' or explain what a DNS zone is versus a DNS record. The description is functional but lacks specificity about what distinguishes this creation operation from other DNS-related creations.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing a domain first), when not to use it, or how it relates to sibling tools like 'createDnsZoneRecord' or 'deleteDnsZone'. The phrase 'for a customer' hints at some context but doesn't provide actionable usage rules or comparisons.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
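With 0% schema coverage, an agent must infer the request shape. A minimal sketch of assembling the call, assuming `zoneName` is the only body field (per the evaluation notes) and that `x-request-id` may be any unique identifier such as a UUID; both are assumptions, not confirmed API behavior.

```python
import uuid

def build_create_dns_zone_request(zone_name: str) -> dict:
    """Assemble headers and body for createDnsZone; x-request-id is required."""
    if not zone_name or "." not in zone_name:
        raise ValueError("zoneName should be a fully qualified domain, e.g. example.com")
    return {
        "headers": {"x-request-id": str(uuid.uuid4())},
        "body": {"zoneName": zone_name},
    }
```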
createDnsZoneRecord · Create DNS zone record · Grade: C · Destructive
Create DNS zone record - Create resource record in a zone
| Name | Required | Description | Default |
|---|---|---|---|
| zoneName | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createDnsZoneRecordBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, showing it's a mutation with destructive potential. The description adds no behavioral context beyond this, such as what gets destroyed, auth needs, rate limits, or side effects. It doesn't contradict annotations, but provides minimal value beyond them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but under-specified and repetitive ('Create DNS zone record - Create resource record in a zone'). It wastes space with tautology instead of adding useful information, making it inefficient despite its short length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 4 parameters, 0% schema coverage, no output schema, and complex nested objects, the description is inadequate. It lacks details on behavior, parameters, usage, or expected outcomes, leaving significant gaps for an AI agent to understand and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning no parameters are documented in the schema. The description adds no parameter information—it doesn't explain 'zoneName', 'createDnsZoneRecordBody', or their sub-properties (e.g., 'type', 'ttl'). This fails to compensate for the lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('Create DNS zone record') and resource ('resource record in a zone'), but it's repetitive and vague. It doesn't specify what a 'resource record' entails or differentiate from siblings like 'createDnsZone' or 'updateDnsZoneRecord', leaving the purpose unclear beyond basic verb+resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing DNS zone), exclusions, or compare to siblings like 'bulkDeleteDnsZoneRecords' or 'updateDnsZoneRecord', offering no usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
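The evaluation notes that `createDnsZoneRecordBody` likely carries sub-properties such as `type` and `ttl`. A minimal payload-builder sketch under that assumption; the record types listed and the field names `name` and `data` are standard DNS conventions, not confirmed Contabo schema.

```python
# Common DNS record types; the actual accepted set is an assumption
COMMON_RECORD_TYPES = {"A", "AAAA", "CNAME", "MX", "TXT", "NS", "SRV"}

def build_dns_record_body(name: str, record_type: str, data: str, ttl: int = 3600) -> dict:
    """Assemble a resource-record body, rejecting obviously invalid input."""
    if record_type not in COMMON_RECORD_TYPES:
        raise ValueError(f"unsupported record type: {record_type}")
    if ttl <= 0:
        raise ValueError("ttl must be positive")
    return {"name": name, "type": record_type, "data": data, "ttl": ttl}
```

The zone itself must already exist (created via createDnsZone) before records can be added to it.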
createHandle · Create specific handle · Grade: D · Destructive
Create specific handle - Create specific handle
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createHandleBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a destructive write operation (readOnlyHint: false, destructiveHint: true), but the description adds no behavioral context beyond what annotations already provide. It doesn't explain what 'destructive' means in this context, what permissions are required, whether the operation is idempotent, or what happens on success/failure. The description doesn't contradict annotations but fails to add meaningful behavioral information.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While technically concise, the description is under-specified rather than efficiently informative. The repetition 'Create specific handle - Create specific handle' wastes space without adding value. It lacks any meaningful structure or front-loading of important information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive write operation with 3 parameters (including a complex nested object), 0% schema description coverage, and no output schema, the description is completely inadequate. It provides no information about what the tool does, what parameters mean, what behavior to expect, or what results are returned. The description fails to compensate for the lack of structured documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 3 parameters (including a complex nested object), the description provides absolutely no information about parameters. It doesn't mention any of the required fields like 'handleType', 'firstName', 'lastName', 'email', 'gender', 'address', or 'phone', nor does it explain what a 'handle' represents or what data should be provided.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create specific handle - Create specific handle' is a tautology that merely restates the tool name and title. It provides no meaningful information about what the tool actually does, what a 'handle' represents in this context, or how it differs from sibling tools like 'createUser' or 'createRole'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description offers no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, appropriate contexts, or how it relates to sibling tools like 'createUser', 'createRole', or 'listHandles'. There's no indication of when this tool should or shouldn't be used.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
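The evaluation lists the required fields hidden inside `createHandleBody` (`handleType`, `firstName`, `lastName`, `email`, `gender`, `address`, `phone`). A minimal sketch of a pre-flight check built on that list; the list itself comes from the schema inspection above and is an assumption about the live API.

```python
# Required fields per the schema inspection above (assumed, not verified)
REQUIRED_HANDLE_FIELDS = (
    "handleType", "firstName", "lastName", "email", "gender", "address", "phone",
)

def build_handle_body(**fields) -> dict:
    """Reject the call early if any required handle field is missing."""
    missing = [f for f in REQUIRED_HANDLE_FIELDS if f not in fields]
    if missing:
        raise ValueError(f"missing handle fields: {missing}")
    return dict(fields)
```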
createInstance · Create a new instance · Grade: C · Destructive
Create a new instance - Create a new instance for your account with the provided parameters.

| ProductId | Product | Disk Size |
|---|---|---|
| V91 | VPS 10 NVMe | 75 GB NVMe |
| V92 | VPS 10 SSD | 150 GB SSD |
| V93 | VPS 10 Storage | 300 GB SSD |
| V94 | VPS 20 NVMe | 100 GB NVMe |
| V95 | VPS 20 SSD | 200 GB SSD |
| V96 | VPS 20 Storage | 400 GB SSD |
| V97 | VPS 30 NVMe | 200 GB NVMe |
| V98 | VPS 30 SSD | 400 GB SSD |
| V99 | VPS 30 Storage | 1000 GB NVMe |
| V100 | VPS 40 NVMe | 250 GB NVMe |
| V101 | VPS 40 SSD | 500 GB SSD |
| V102 | VPS 40 Storage | 1200 GB NVMe |
| V103 | VPS 50 NVMe | 300 GB NVMe |
| V104 | VPS 50 SSD | 600 GB SSD |
| V105 | VPS 50 Storage | 1400 GB SSD |
| V106 | VPS 60 NVMe | 350 GB NVMe |
| V107 | VPS 60 SSD | 700 GB SSD |
| V8 | VDS S | 180 GB NVMe |
| V9 | VDS M | 240 GB NVMe |
| V10 | VDS L | 360 GB NVMe |
| V11 | VDS XL | 480 GB NVMe |
| V16 | VDS XXL | 720 GB NVMe |
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createInstanceBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which correctly align with the 'create' action. The description adds the product table showing available instance types, which provides useful context beyond annotations about what can be created. However, it doesn't mention behavioral aspects like whether creation is immediate, asynchronous, or has rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with redundant phrasing ('Create a new instance - Create a new instance') and includes a massive HTML table that makes it verbose and difficult to parse. The core information is buried in markup rather than being front-loaded in clear prose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive creation tool with complex nested parameters and no output schema, the description is incomplete. While the product table helps with productId selection, it doesn't address other critical parameters, expected return values, error conditions, or dependencies on other tools like 'createSecret' for SSH keys. The annotations provide basic safety context but more operational guidance is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage (parameters have no descriptions in the schema), the description carries the full burden. The product table provides crucial semantic information about valid productId values and their corresponding configurations, which compensates significantly for the schema gap. However, it doesn't explain other required parameters like 'period' or 'createInstanceBody'.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Create a new instance for your account with the provided parameters', which is a tautology that essentially restates the tool name and title. It doesn't specify what type of instance (compute/VPS/VDS) or distinguish it from sibling tools like 'createObjectStorage' or 'createPrivateNetwork' that also create resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives. While the table shows product options, it doesn't explain prerequisites, when this should be used instead of other creation tools, or any constraints like account limits or billing implications.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
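Because the product table is the only parameter documentation this tool ships, an agent can embed it as a lookup to validate `productId` before invoking the tool. The mapping below is transcribed directly from the product table in the description; everything else is a sketch.

```python
# ProductId -> (product name, disk size), transcribed from the tool's product table
VPS_PRODUCTS = {
    "V91": ("VPS 10 NVMe", "75 GB NVMe"),
    "V92": ("VPS 10 SSD", "150 GB SSD"),
    "V93": ("VPS 10 Storage", "300 GB SSD"),
    "V94": ("VPS 20 NVMe", "100 GB NVMe"),
    "V95": ("VPS 20 SSD", "200 GB SSD"),
    "V96": ("VPS 20 Storage", "400 GB SSD"),
    "V97": ("VPS 30 NVMe", "200 GB NVMe"),
    "V98": ("VPS 30 SSD", "400 GB SSD"),
    "V99": ("VPS 30 Storage", "1000 GB NVMe"),
    "V100": ("VPS 40 NVMe", "250 GB NVMe"),
    "V101": ("VPS 40 SSD", "500 GB SSD"),
    "V102": ("VPS 40 Storage", "1200 GB NVMe"),
    "V103": ("VPS 50 NVMe", "300 GB NVMe"),
    "V104": ("VPS 50 SSD", "600 GB SSD"),
    "V105": ("VPS 50 Storage", "1400 GB SSD"),
    "V106": ("VPS 60 NVMe", "350 GB NVMe"),
    "V107": ("VPS 60 SSD", "700 GB SSD"),
    "V8": ("VDS S", "180 GB NVMe"),
    "V9": ("VDS M", "240 GB NVMe"),
    "V10": ("VDS L", "360 GB NVMe"),
    "V11": ("VDS XL", "480 GB NVMe"),
    "V16": ("VDS XXL", "720 GB NVMe"),
}

def disk_size(product_id: str) -> str:
    """Return the disk size for a productId, or fail before the API call does."""
    try:
        return VPS_PRODUCTS[product_id][1]
    except KeyError:
        raise ValueError(f"unknown productId: {product_id}")
```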
createObjectStorage · Create a new object storage · Grade: A · Destructive
Create a new object storage - Create / purchase a new object storage in your account. Please note that you can only buy one object storage per location. You can actually increase the object storage space via POST to /v1/object-storages/{objectStorageId}/resize
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createObjectStorageBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description aligns with by describing a creation/purchase action (non-read-only) and implying a financial commitment. The description adds valuable behavioral context beyond annotations: it notes the 'one per location' constraint and references a resize endpoint, which helps the agent understand limitations and related operations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the main action and includes two additional sentences that provide important constraints and alternatives. It avoids unnecessary fluff, but the second sentence could be more streamlined (e.g., merging the constraint and alternative into one clearer statement).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive creation with financial implications), no output schema, and 0% schema description coverage, the description is incomplete. It covers the core action and some behavioral constraints but lacks details on parameters, error conditions, or return values, leaving gaps for the agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The tool description does not mention any parameters (e.g., region, totalPurchasedSpaceTB) or their meanings, leaving them undocumented. However, with 3 parameters and no schema descriptions, the baseline is low, and the description fails to compensate, resulting in a minimal score.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create / purchase a new object storage') and resource ('object storage in your account'), making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'createInstance' or 'createPrivateNetwork' beyond mentioning the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context by noting 'you can only buy one object storage per location' and mentions an alternative action ('increase the object storage space via POST...'), which implies when not to use this tool. However, it lacks explicit guidance on when to choose this over other creation tools (e.g., 'createInstance') or prerequisites like authentication needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
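The one-per-location constraint is the kind of rule an agent should check before issuing a purchase. A minimal client-side guard, assuming the agent has already listed its existing object storages and extracted their regions (the region identifiers here are illustrative).

```python
def can_create_object_storage(existing_regions: list, region: str) -> bool:
    """Contabo allows only one object storage per location; check before buying.

    If the region already has one, the right call is the resize endpoint
    (POST /v1/object-storages/{objectStorageId}/resize), not a new purchase.
    """
    return region not in existing_regions
```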
createPrivateNetwork · Create a new Private Network · Grade: D · Destructive
Create a new Private Network - Create a new Private Network in your account.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createPrivateNetworkBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which already inform the agent that this is a write operation with potential destructive effects. The description does not contradict these annotations, as 'Create' aligns with a write operation. However, it adds no behavioral context beyond what annotations provide, such as what gets destroyed (e.g., resource limits, costs), authentication needs, or rate limits. With annotations covering the core safety profile, the description adds minimal value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is overly repetitive ('Create a new Private Network - Create a new Private Network in your account'), wasting space without adding value. It is front-loaded but with redundant phrasing. While brief, it lacks meaningful structure or efficiency, as the repetition does not serve any informative purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a destructive write operation with 3 parameters and nested objects) and the absence of an output schema, the description is insufficient. It does not explain what the tool returns, potential errors, or the impact of the destructive hint. With 0% schema coverage and no output schema, the description should provide more context but fails to do so, leaving significant gaps in understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the parameters (x-request-id, createPrivateNetworkBody with name, region, description) are documented in the schema. The description provides no information about these parameters, their purposes, or how they affect the creation process. It fails to compensate for the lack of schema documentation, leaving all parameters unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is a tautology that essentially restates the name and title ('Create a new Private Network - Create a new Private Network in your account'). It does not provide any specific details about what a Private Network is or what resources it creates. While it mentions the resource (Private Network), it lacks a clear, distinct purpose statement that differentiates it from siblings like 'createInstance' or 'createObjectStorage'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., account permissions, available resources), typical use cases, or comparisons to sibling tools like 'patchPrivateNetwork' or 'deletePrivateNetwork'. The description offers no context for decision-making.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
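The evaluation identifies `name`, `region`, and `description` as the fields inside `createPrivateNetworkBody`. A minimal body-builder sketch under that assumption; which fields are actually required is not confirmed by the schema.

```python
def build_private_network_body(name: str, region: str, description: str = "") -> dict:
    """Assemble the createPrivateNetworkBody; field names assumed from inspection."""
    if not name:
        raise ValueError("name is required")
    body = {"name": name, "region": region}
    if description:
        body["description"] = description
    return body
```

Instances are attached afterwards via assignInstancePrivateNetwork, so creating the network is only the first step.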
createPtrRecord · Create a new PTR Record using ip address · Grade: C · Destructive
Create a new PTR Record using ip address - Create a new PTR Record using ip address. Only IPv6 can be created
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createPtrRecordBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description doesn't contradict—it implies creation (a write operation) and doesn't claim safety. However, the description adds minimal behavioral context beyond annotations: it specifies 'Only IPv6 can be created', which is a constraint not covered by annotations. It lacks details on permissions, side effects, or response behavior, leaving gaps for a destructive tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive and inefficient, starting with a redundant phrase ('Create a new PTR Record using ip address - Create a new PTR Record using ip address.') before adding the IPv6 constraint. It's not front-loaded with key information and wastes space on tautology rather than providing useful details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a destructive tool with 3 parameters, 0% schema coverage, no output schema, and complex nested objects, the description is inadequate. It doesn't explain what a PTR record is, the creation process, expected inputs/outputs, or error conditions. The IPv6 constraint is noted, but overall, it lacks the depth needed for effective tool use in this context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning no parameters have descriptions in the schema. The description provides no information about parameters beyond implying 'ip address' is used. It doesn't explain the three required parameters (x-request-id, createPtrRecordBody with ptr, ip, ttl), their purposes, formats, or relationships. This fails to compensate for the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological, repeating the title verbatim ('Create a new PTR Record using ip address') and adding only a minor constraint ('Only IPv6 can be created'). It states the verb (create) and resource (PTR Record) but lacks specificity about what a PTR record is or its DNS context. It doesn't distinguish from siblings like 'createDnsZoneRecord' or 'updatePtrRecord' beyond the IPv6 limitation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description mentions 'Only IPv6 can be created', which hints at a constraint but doesn't clarify when to choose this over other DNS-related tools like 'createDnsZoneRecord' or 'updatePtrRecord'. There's no mention of prerequisites, dependencies, or typical use cases.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
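The IPv6-only constraint is easy to enforce client-side with the standard library before calling the tool. A minimal sketch using the `ptr`, `ip`, and `ttl` fields the schema inspection surfaced; treat those names as assumptions.

```python
import ipaddress

def build_ptr_record_body(ip: str, ptr: str, ttl: int = 3600) -> dict:
    """createPtrRecord accepts IPv6 only; reject anything else up front."""
    addr = ipaddress.ip_address(ip)  # raises ValueError on malformed input
    if addr.version != 6:
        raise ValueError("only IPv6 PTR records can be created")
    return {"ip": str(addr), "ptr": ptr, "ttl": ttl}
```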
createRole · Create a new role · Grade: C · Destructive
Create a new role - Create a new role. To get a list of available API endpoints (apiName) and their actions, please refer to the GET api-permissions endpoint. For specifying resources, please enter tag ids. For these to take effect, assign them to a resource in the tag management API.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createRoleBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description doesn't contradict. The description adds some behavioral context about where to find API endpoints and how resources work with tags, but doesn't elaborate on the destructive nature (what gets modified/permanently changed), authentication requirements, or error conditions. It provides partial value beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured with redundant phrasing ('Create a new role - Create a new role'). While it attempts to be concise, it under-specifies critical information. The sentences about API permissions and tag management are useful but feel tacked on without clear organization, making it inefficient rather than truly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive creation tool with 3 parameters, nested objects, no output schema, and 0% schema coverage, the description is inadequate. It misses explanations of key inputs, behavioral implications, expected outcomes, and error handling. The mention of external endpoints doesn't compensate for the lack of core tool context needed for proper invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries full burden for parameter explanation. It only mentions 'apiName' and 'resources' (tag ids) briefly, ignoring the 3 parameters and nested structure. Key parameters like 'name', 'admin', 'accessAllResources', and 'permissions' array aren't addressed, leaving significant gaps in understanding what inputs are needed.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool creates a new role, which is a specific verb+resource. However, it doesn't differentiate from sibling tools like 'updateRole' or 'deleteRole' beyond the basic action. The description repeats 'Create a new role' twice, adding redundancy rather than clarity about what distinguishes this creation operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'updateRole' or 'deleteRole'. It mentions referring to another endpoint for API permissions and tag management for resources, but these are implementation details rather than usage context. There's no mention of prerequisites, constraints, or typical scenarios for role creation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
createSecret: Create a new secret (C, Destructive)
Create a new secret - Create a new secret in your account with attributes name, type and value. Attribute type can be password or ssh.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createSecretBody | Yes |
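A minimal sketch of the arguments, assuming the body fields named in the description (name, type, value); the type constraint comes from the description, everything else is illustrative:

```python
# Hypothetical createSecret arguments; the description says type must be
# "password" or "ssh" -- names and values here are illustrative only.
import json

create_secret_args = {
    "x-request-id": "req-0002",
    "createSecretBody": {
        "name": "deploy-key",
        "type": "ssh",  # allowed per the description: "password" or "ssh"
        "value": "ssh-ed25519 AAAA...example",
    },
}
print(json.dumps(create_secret_args, indent=2))
```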
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description doesn't contradict—it correctly implies a write operation ('Create'). However, the description adds minimal behavioral context beyond annotations: it mentions attributes (name, type, value) but doesn't disclose critical behaviors like authentication needs, rate limits, or what 'destructive' entails (e.g., overwriting existing secrets).
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the main action, but it's somewhat redundant (repeats 'Create a new secret'). The second sentence adds parameter details efficiently, but overall structure could be improved by avoiding repetition and integrating information more cohesively.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 parameters, destructive operation, no output schema), the description is insufficient. It lacks information on return values, error conditions, authentication requirements, and doesn't fully explain parameters or usage context. With annotations covering only read/write and destructiveness, more behavioral and operational details are needed for effective use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It lists parameters (name, type, value) and explains type options (password or ssh), adding some semantics. However, it omits details on 'x-request-id' and 'x-trace-id' parameters, and doesn't clarify parameter constraints (e.g., value complexity rules from schema). The description partially addresses the coverage gap but leaves key parameters undocumented.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new secret') and specifies the resource ('in your account'), which matches the tool name and title. It distinguishes itself from siblings like 'deleteSecret' and 'updateSecret' by focusing on creation. However, it doesn't explicitly differentiate from other creation tools (e.g., 'createInstance') beyond the resource type.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, appropriate contexts, or when not to use it. For example, it doesn't clarify if this should be used instead of 'updateSecret' for new secrets or how it relates to 'generateClientSecret'.
createSnapshot: Create a new instance snapshot (C, Destructive)
Create a new instance snapshot - Create a new snapshot for instance, with name and description attributes
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createSnapshotBody | Yes |
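A minimal sketch of the arguments: instanceId comes from the table, the body's name/description fields from the tool description. The numeric id format is an assumption, since the schema leaves it undocumented:

```python
# Hypothetical createSnapshot arguments; the numeric instanceId and all
# values are assumptions for illustration.
import json

create_snapshot_args = {
    "instanceId": 100123,  # assumed numeric instance id
    "x-request-id": "req-0003",
    "createSnapshotBody": {
        "name": "pre-upgrade",
        "description": "Before kernel upgrade",
    },
}
print(json.dumps(create_snapshot_args, indent=2))
```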
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description doesn't contradict. However, it adds no behavioral context beyond this, such as explaining what 'destructive' entails (e.g., potential impact on instance performance), rate limits, or authentication needs, leaving gaps despite annotations.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, but it includes redundant phrasing ('Create a new instance snapshot - Create a new snapshot') that wastes space. It could be more efficient by eliminating repetition and focusing on unique aspects.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and low schema coverage, the description is inadequate. It lacks details on return values, error conditions, or operational constraints, failing to provide a complete picture for safe and effective use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate, but it only vaguely mentions 'name and description attributes' without detailing parameters like 'instanceId', 'x-request-id', or nested object specifics. This adds minimal value beyond the schema's structural information.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool creates a new snapshot for an instance with name and description attributes, which clarifies the verb and resource. However, it doesn't differentiate from sibling tools like 'rollbackSnapshot' or 'updateSnapshot', and the phrasing is somewhat redundant with the title.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'createCustomImage' or 'rollbackSnapshot'. The description lacks context about prerequisites, such as needing an existing instance, or exclusions, making it minimal in usage direction.
createTag: Create a new tag (C, Destructive)
Create a new tag - Create a new tag in your account with attribute name and optional attribute color.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createTagBody | Yes |
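A minimal sketch of the arguments, assuming the body carries the name (required) and color (optional) attributes from the description; the hex color format is a guess:

```python
# Hypothetical createTag arguments; name is required, color optional per
# the tool description. The hex color format is an assumption.
import json

create_tag_args = {
    "x-request-id": "req-0004",
    "createTagBody": {"name": "production", "color": "#0db7ed"},
}
print(json.dumps(create_tag_args, indent=2))
```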
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description does not contradict—it implies a creation action. The description adds minimal behavioral context by mentioning attributes (name and optional color), but does not elaborate on permissions, rate limits, or what 'destructive' entails (e.g., if it overwrites existing tags). With annotations covering safety, it adds some value but lacks depth.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the main action, but it is somewhat redundant ('Create a new tag - Create a new tag'). It could be more efficient by avoiding repetition and better structuring the attribute details. However, it is not overly verbose and conveys the core idea in one sentence.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (creation with destructive hint, 3 parameters, no output schema), the description is inadequate. It lacks details on behavioral traits (e.g., what destruction means), parameter meanings beyond name/color, and usage context. With annotations providing some safety info but no output schema, the description should do more to explain the tool's operation and expected outcomes.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'attribute name and optional attribute color', which partially covers two parameters (name and color) but omits the third parameter (description) and does not explain the required 'x-request-id' or optional 'x-trace-id'. It fails to compensate for the low coverage, leaving key parameters undocumented.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Create a new tag') and specifies the resource ('in your account'), which is a specific verb+resource combination. However, it does not explicitly differentiate from sibling tools like 'updateTag' or 'deleteTag', though the action is distinct. The description is not tautological as it adds details beyond the name/title.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'updateTag' for modifying existing tags or 'deleteTag' for removal. It lacks context about prerequisites, like whether the tag name must be unique or if there are limits on tag creation, and does not mention any exclusions or specific scenarios for usage.
createUser: Create a new user (A, Destructive)
Create a new user - Create a new user with required attributes name, email, enabled, totp (=Two-factor authentication 2FA), admin (=access to all endpoints and resources), accessAllResources and roles. You can't specify any password / secrets for the user. For security reasons the user will have to specify secrets on his own.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| createUserBody | Yes |
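A minimal sketch of the arguments, using the attribute names the description lists (name, email, enabled, totp, admin, accessAllResources, roles); note that, per the description, no password or secret field appears. The role-id format is an assumption:

```python
# Hypothetical createUser arguments; attribute names follow the tool
# description. Deliberately no password field -- the user sets their own
# secrets afterwards, per the description.
import json

create_user_args = {
    "x-request-id": "req-0005",
    "createUserBody": {
        "name": "Jane Doe",
        "email": "jane@example.com",
        "enabled": True,
        "totp": True,                  # two-factor authentication (2FA)
        "admin": False,                # access to all endpoints and resources
        "accessAllResources": False,
        "roles": [101],                # assumed role ids (see createRole)
    },
}
print(json.dumps(create_user_args, indent=2))
```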
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond the annotations. While annotations indicate this is a destructive write operation (readOnlyHint=false, destructiveHint=true), the description explains the security constraint about password specification and that users must set their own secrets. This provides important operational context that the annotations alone don't convey.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that each serve a clear purpose: the first states the action and key attributes, the second explains the security constraint. It's front-loaded with the core functionality and wastes no words.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive creation tool with no output schema and poor schema documentation (0% coverage), the description does a reasonably complete job. It explains the core functionality, lists key attributes, and provides important security context. However, it could be more complete by explaining the response format or error conditions.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the schema provides no parameter descriptions. The description lists several attributes (name, email, enabled, totp, admin, accessAllResources, roles) but doesn't fully explain all parameters from the input schema. It adds some semantic value but doesn't completely compensate for the lack of schema descriptions.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Create a new user') and distinguishes it from siblings by specifying the required attributes. It explicitly mentions what cannot be done ('You can't specify any password / secrets for the user'), which helps differentiate it from potential password-setting tools.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context about when to use this tool (to create a new user with specific attributes) and includes an important security constraint about not specifying passwords. However, it doesn't explicitly mention when NOT to use it or name specific alternatives among the sibling tools.
deleteAssignment: Delete existing tag assignment (B, Destructive)
Delete existing tag assignment - Tag assignment will be removed from the specified resource. If this tag is being used for access restrictions the affected users will no longer be able to access that resource.
| Name | Required | Description | Default |
|---|---|---|---|
| tagId | Yes | ||
| resourceId | Yes | ||
| x-trace-id | No | ||
| resourceType | Yes | ||
| x-request-id | Yes |
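A minimal sketch of the arguments: all three identifiers are required per the table, but since the schema documents none of them, the id formats and the resourceType enum value below are assumptions:

```python
# Hypothetical deleteAssignment arguments; id formats and the resourceType
# value ("instance") are assumptions -- the schema documents none of them.
import json

delete_assignment_args = {
    "tagId": 2005,
    "resourceId": "inst-100123",
    "resourceType": "instance",  # assumed enum value
    "x-request-id": "req-0006",
}
print(json.dumps(delete_assignment_args, indent=2))
```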
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by describing a deletion action. The description adds valuable context beyond annotations: it specifies that the tag is removed from a resource and warns about potential access restriction impacts. This helps the agent understand side effects not captured in structured fields.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with no wasted words. The first sentence restates the purpose, and the second adds critical behavioral context about access restrictions. It's front-loaded and efficiently structured, though slightly repetitive with the title.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and 0% parameter coverage, the description is moderately complete. It covers the core action and a key side effect, but lacks parameter explanations, success/failure conditions, or error handling. Annotations help by marking it as destructive, but more behavioral detail would improve completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 5 parameters have descriptions in the schema. The tool description provides no information about parameters like tagId, resourceType, or resourceId, failing to compensate for the lack of schema documentation. This leaves the agent guessing about parameter meanings and formats.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete existing tag assignment') and specifies the resource affected ('tag assignment will be removed from the specified resource'), which matches the tool name and title. However, it doesn't explicitly differentiate from sibling tools like 'deleteTag' (which likely deletes the tag itself rather than its assignment) or 'createAssignment' (its counterpart).
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions a consequence ('If this tag is being used for access restrictions...'), but this is behavioral information, not usage guidance. There's no mention of prerequisites, when to choose this over other deletion tools, or contextual triggers.
deleteDnsZone: Delete a DNS zone (C, Destructive)
Delete a DNS zone. - Delete a DNS Zone using zone name.
| Name | Required | Description | Default |
|---|---|---|---|
| zoneName | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
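A minimal sketch of the arguments; notably the zone is addressed by name rather than a numeric id. The value is illustrative only:

```python
# Hypothetical deleteDnsZone arguments; the zone is addressed by name,
# not by a numeric id. Value is illustrative.
import json

delete_dns_zone_args = {
    "zoneName": "example.com",
    "x-request-id": "req-0007",
}
print(json.dumps(delete_dns_zone_args, indent=2))
```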
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a destructive, non-read-only operation, which the description aligns with by using 'Delete'. The description doesn't add behavioral details beyond this (e.g., irreversibility, confirmation steps, or error conditions), but since annotations cover the safety profile, the bar is lower. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but inefficient—the second sentence repeats the first without adding value. It's front-loaded with the core action but wastes space on redundancy. A single, clearer sentence would improve conciseness.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 3 parameters (0% schema coverage) and no output schema, the description is inadequate. It lacks details on parameter usage, expected outcomes, error handling, or dependencies (e.g., relation to DNS records). Annotations help but don't fill these gaps, making the tool hard to use correctly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description mentions 'zone name' but doesn't explain what 'zoneName' is, its format, or the purpose of 'x-request-id' and 'x-trace-id'. It fails to compensate for the lack of schema documentation, leaving key parameters unclear.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('Delete a DNS zone') and resource ('DNS zone'), which is clear but basic. It doesn't distinguish this tool from its sibling 'bulkDeleteDnsZoneRecords' or explain what a DNS zone deletion entails versus other DNS operations. The second sentence is redundant, adding no new information.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'bulkDeleteDnsZoneRecords' or 'deleteDnsZoneRecord'. There's no mention of prerequisites (e.g., needing to delete records first) or consequences (e.g., impact on domains). The description offers only the basic action without context.
deleteDnsZoneRecord: Delete a DNS zone record (C, Destructive)
Delete a DNS zone record - Delete a DNZ Zone's record
| Name | Required | Description | Default |
|---|---|---|---|
| recordId | Yes | ||
| zoneName | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
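A minimal sketch of the arguments; unlike deleteDnsZone, this call needs both the zone name and the individual record's id. The numeric recordId format is an assumption:

```python
# Hypothetical deleteDnsZoneRecord arguments; the numeric recordId format
# is an assumption (the schema leaves it undocumented).
import json

delete_dns_zone_record_args = {
    "zoneName": "example.com",
    "recordId": 42,  # assumed numeric record id
    "x-request-id": "req-0008",
}
print(json.dumps(delete_dns_zone_record_args, indent=2))
```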
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description doesn't add behavioral context beyond what annotations provide. Annotations already indicate this is destructive (destructiveHint: true) and not read-only (readOnlyHint: false). The description doesn't mention permissions needed, irreversible consequences, rate limits, or what happens to DNS resolution after deletion.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely brief (essentially one repeated phrase), which could be seen as concise. However, it's under-specified rather than efficiently informative - it wastes space repeating itself ('Delete a DNS zone record - Delete a DNZ Zone's record') instead of providing useful content.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with 4 parameters (0% documented in schema) and no output schema, the description is severely inadequate. It doesn't explain what gets deleted, how to identify records, what happens after deletion, error conditions, or return values. The annotations help but don't compensate for the missing operational context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 4 parameters (3 required), the description provides zero information about what parameters mean or how to use them. It doesn't explain what 'recordId', 'zoneName', 'x-request-id', or 'x-trace-id' represent or how to obtain them.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological - it essentially restates the tool name/title ('Delete a DNS zone record') without adding meaningful specificity. While it mentions the resource (DNS zone record), it doesn't distinguish this from sibling tools like 'bulkDeleteDnsZoneRecords' or 'deleteDnsZone', nor does it specify what type of DNS record is affected.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives. There's no mention of prerequisites (like needing the record ID), when not to use it, or how it differs from related tools like 'bulkDeleteDnsZoneRecords' or 'updateDnsZoneRecord'.
deleteImage: Delete an uploaded custom image by its id (C, Destructive)
Delete an uploaded custom image by its id - Your are free to delete a previously uploaded custom images at any time
| Name | Required | Description | Default |
|---|---|---|---|
| imageId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
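A minimal sketch of the arguments; the imageId format below is an assumption, since the schema leaves it undocumented:

```python
# Hypothetical deleteImage arguments; the imageId format is an assumption
# (the schema documents no parameter formats).
import json

delete_image_args = {
    "imageId": "img-0c5e7a",  # assumed id format
    "x-request-id": "req-0009",
}
print(json.dumps(delete_image_args, indent=2))
```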
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, already signaling a destructive write operation. The description adds context by confirming deletion is allowed 'at any time', which hints at no time-based restrictions, but doesn't detail authentication needs, rate limits, or irreversible effects beyond what annotations imply.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core action, but includes a redundant phrase ('Your are free to delete a previously uploaded custom images at any time') that could be trimmed without losing meaning. Overall, it's efficient but not maximally concise.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 3 parameters (0% schema coverage), no output schema, and annotations only covering safety, the description is insufficient. It lacks details on parameter usage, error conditions, or what happens post-deletion (e.g., confirmation, side effects), making it incomplete for safe agent invocation.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'by its id', which aligns with the 'imageId' parameter, but doesn't explain 'x-request-id' or 'x-trace-id' (e.g., their purposes or formats). This leaves two parameters inadequately covered.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('uploaded custom image'), with the title specifying 'by its id'. It distinguishes from siblings like 'deleteSnapshot' or 'deleteDnsZone' by focusing on images, but doesn't explicitly differentiate from 'updateImage' or 'retrieveImage' in usage context.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance with 'Your are free to delete a previously uploaded custom images at any time', which implies no restrictions but doesn't specify when to use this over alternatives like 'updateImage' or prerequisites. No explicit when-not-to-use or sibling tool comparisons are included.
deletePrivateNetwork: Delete existing Private Network by id (A, Destructive)
Delete existing Private Network by id - Delete existing Virtual Private Cloud by id and automatically unassign all instances from it
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | | |
| x-request-id | Yes | | |
| privateNetworkId | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false and destructiveHint=true, indicating it's a destructive mutation. The description adds valuable context beyond annotations by specifying that deletion 'automatically unassign all instances from it', which clarifies side-effects not captured in annotations. No contradictions with annotations exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action and efficiently adds critical behavioral detail in a single sentence. No redundant information is present, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and 0% schema description coverage, the description adequately covers the main action and a key side-effect. However, it lacks details on error conditions, response format, or implications of the x-* parameters, leaving gaps in full contextual understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description mentions 'by id' which corresponds to 'privateNetworkId', but doesn't explain the other two parameters (x-request-id and x-trace-id) or their purposes. It adds minimal semantic value beyond what's inferable from parameter names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Delete'), the resource ('existing Private Network/Virtual Private Cloud'), and the identifier ('by id'). It distinguishes from siblings like 'deleteDnsZone' or 'deleteSnapshot' by specifying the exact resource type, and from 'unassignInstancePrivateNetwork' by indicating it's a full deletion with automatic unassignment.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing to delete a private network and unassign instances, but doesn't explicitly state when to use this versus alternatives like 'patchPrivateNetwork' for modifications or 'unassignInstancePrivateNetwork' for partial removal. No prerequisites or exclusions are mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
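Several evaluations lean on the tool's annotations (readOnlyHint, destructiveHint), which are standard MCP annotation fields. A minimal sketch of how an agent could use them as a guard, assuming values that mirror this listing; the confirmation policy is an illustration, not part of the server.

```python
# Annotation values as reported for deletePrivateNetwork in this listing.
delete_private_network_annotations = {
    "readOnlyHint": False,    # the tool mutates state
    "destructiveHint": True,  # deletion removes the network and unassigns instances
}

def needs_confirmation(annotations: dict) -> bool:
    """Agent-side policy sketch: pause for confirmation before any call
    that is flagged destructive and not read-only."""
    return annotations.get("destructiveHint", False) and not annotations.get("readOnlyHint", False)

print(needs_confirmation(delete_private_network_annotations))  # True
```

Because annotations are hints, a conservative agent would still read the description for consequences the hints cannot express, which is exactly the gap these reviews measure.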
deletePtrRecord: Delete a PTR Record using ip address (grade C, Destructive)
Delete a PTR Record using ip address - Delete a PTR Record using ip address. Only IPv6 can be deleted
| Name | Required | Description | Default |
|---|---|---|---|
| ipAddress | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which already tell the agent this is a destructive write operation. The description adds that it's for deleting PTR records and specifies 'Only IPv6 can be deleted', which provides useful behavioral context beyond annotations. However, it doesn't mention side effects, permissions needed, or error conditions, leaving gaps in transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is poorly structured and repetitive: 'Delete a PTR Record using ip address - Delete a PTR Record using ip address. Only IPv6 can be deleted'. The first part is duplicated, wasting space. While short, it's not front-loaded with critical info and includes redundancy instead of adding value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 3 parameters (0% schema coverage) and no output schema, the description is inadequate. It mentions the IPv6 constraint but doesn't explain parameters, error handling, or what happens on success. Annotations cover safety profile, but the description doesn't add enough context to make this tool fully understandable to an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 3 parameters (ipAddress, x-request-id, x-trace-id) are documented in the schema. The description only mentions 'ip address' generically, without explaining what format it expects, what the x-* parameters are for, or any constraints. This fails to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool deletes a PTR record using an IP address, which is a clear verb+resource combination. However, it repeats the same phrase twice ('Delete a PTR Record using ip address' appears verbatim twice), making it somewhat redundant. It distinguishes from siblings like 'createPtrRecord' and 'updatePtrRecord' by specifying deletion, but doesn't clearly differentiate from other deletion tools like 'deleteDnsZoneRecord' beyond the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions 'Only IPv6 can be deleted', which is a constraint but not usage guidance. There's no mention of prerequisites, when to choose this over other deletion tools, or what happens after deletion. Siblings include 'deleteDnsZoneRecord' and 'unassignIp', but no comparison is made.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
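Since the description states only IPv6 PTR records can be deleted, an agent can validate the address family locally before issuing the destructive call. A sketch using Python's stdlib `ipaddress` module; the pre-flight guard is an assumption about good agent hygiene, not documented Contabo behavior.

```python
import ipaddress

def can_delete_ptr(ip: str) -> bool:
    """Return True only for valid IPv6 addresses, per the tool's
    'Only IPv6 can be deleted' constraint."""
    try:
        return isinstance(ipaddress.ip_address(ip), ipaddress.IPv6Address)
    except ValueError:
        return False

print(can_delete_ptr("2001:db8::1"))  # True
print(can_delete_ptr("203.0.113.7"))  # False: IPv4 is rejected up front
```

Checking locally avoids burning a destructive API call on an address the server would refuse anyway.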
deleteRole: Delete existing role by id (grade A, Destructive)
Delete existing role by id - You can't delete a role if it is still assigned to a user. In such cases please remove the role from the users.
| Name | Required | Description | Default |
|---|---|---|---|
| roleId | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, covering safety. The description adds valuable behavioral context: the prerequisite condition (role must not be assigned) and the consequence (deletion fails if assigned). This goes beyond annotations by explaining failure modes.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two sentences with zero waste: the first states the purpose, the second provides critical usage constraint. It's front-loaded and efficiently structured, with every word earning its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and 0% schema coverage, the description is reasonably complete. It covers purpose, key constraint, and alternative action. However, it lacks details on response format or error handling, which could be helpful given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'by id', implying the roleId parameter, but doesn't explain x-request-id or x-trace-id. It adds minimal semantics, compensating slightly but not fully for the coverage gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and target resource ('existing role by id'), making the purpose unambiguous. However, it doesn't differentiate from sibling tools like 'deleteUser' or 'deleteAssignment' beyond the resource type, missing explicit distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
It provides clear context on when NOT to use this tool ('You can't delete a role if it is still assigned to a user') and suggests an alternative action ('remove the role from the users'). This helps the agent avoid errors, though it doesn't explicitly name sibling alternatives like 'updateRole' for reassignment.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deleteSecret: Delete existing secret by id (grade C, Destructive)
Delete existing secret by id - You can remove a specific secret from your account.
| Name | Required | Description | Default |
|---|---|---|---|
| secretId | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=false and destructiveHint=true, so the agent knows this is a destructive write operation. The description adds some context by confirming it's a deletion action and specifying 'from your account,' but doesn't provide additional behavioral details like whether deletion is permanent/reversible, authentication requirements, or rate limits. With annotations covering the safety profile, this earns a baseline 3.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately brief with two concise sentences that get straight to the point. However, the second sentence ('You can remove a specific secret from your account') is somewhat redundant with the first, slightly reducing efficiency. Overall, it's well-structured and front-loaded.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with 3 parameters (2 required), 0% schema description coverage, and no output schema, the description is insufficient. It doesn't explain what happens after deletion (success response, error conditions), doesn't clarify the purpose of the x-* headers, and provides minimal behavioral context beyond what annotations already indicate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 3 parameters have descriptions in the schema. The description only mentions 'by id' which corresponds to the secretId parameter, leaving x-request-id and x-trace-id completely unexplained. For a tool with 3 parameters and 0% schema coverage, the description should do more to explain parameter purposes and relationships.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and target ('existing secret by id'), providing a specific verb+resource combination. However, it doesn't differentiate this tool from other deletion tools in the sibling list (like deleteAssignment, deleteDnsZone, deleteUser, etc.), missing an opportunity to specify what makes secret deletion unique.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance - it only states you can remove a specific secret from your account. It doesn't indicate when to use this versus alternatives (like updateSecret for modification or createSecret for creation), nor does it mention prerequisites (e.g., needing the secret ID from retrieveSecretList). No explicit when/when-not guidance is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
deleteSnapshot: Delete existing snapshot by id (grade D, Destructive)
Delete existing snapshot by id - Delete existing instance snapshot by id
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | | |
| snapshotId | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which already signal this is a destructive write operation. The description adds minimal behavioral context by repeating 'delete', but doesn't disclose additional traits like whether deletion is irreversible, requires specific permissions, affects associated resources, or has side effects. It doesn't contradict annotations, but adds little value beyond them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but inefficiently structured—it repeats 'Delete existing snapshot by id' with slight variation, wasting words without adding clarity. It's front-loaded with the core action but fails to use its brevity to convey useful information. The repetition suggests under-specification rather than purposeful conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's destructive nature (per annotations), 4 parameters with 0% schema coverage, and no output schema, the description is highly incomplete. It doesn't explain what a snapshot is, the implications of deletion, parameter meanings, or expected outcomes. For a mutation tool with significant complexity, this description leaves critical gaps for an agent to operate safely and correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description provides no information about parameters—it doesn't explain what instanceId or snapshotId refer to, or the purpose of x-request-id and x-trace-id. For a tool with 4 parameters (3 required), this leaves the agent with no semantic guidance beyond raw schema types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological, essentially restating the name/title ('Delete existing snapshot by id') with minor repetition. It doesn't specify what a 'snapshot' is in this context or what resource it operates on, beyond what's already implied by the name. While it includes the verb 'delete' and resource 'snapshot', it lacks specificity and doesn't distinguish this tool from sibling deletion tools like deleteImage or deleteDnsZone.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing snapshot), exclusions, or related tools like retrieveSnapshot or rollbackSnapshot from the sibling list. There's no context about when deletion is appropriate or what happens after deletion.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
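The review notes that nothing explains how the four deleteSnapshot parameters relate. A hedged sketch of the call arguments with placeholder values; the idea that the snapshot id is scoped to its parent instance is an inference from the parameter names, not documented behavior.

```python
import uuid

# Placeholder values throughout; both ids are required together,
# presumably because snapshots are scoped to the instance they were
# taken from (inference, not documented).
args = {
    "instanceId": "instance-example-id",   # assumed: id of the owning instance
    "snapshotId": "snap-example-id",       # assumed: opaque snapshot id
    "x-request-id": str(uuid.uuid4()),     # required correlation UUID
    # "x-trace-id" is optional and omitted here
}

required = {"instanceId", "snapshotId", "x-request-id"}
assert required <= set(args)
print(sorted(args))  # ['instanceId', 'snapshotId', 'x-request-id']
```

A better tool description would confirm or correct that scoping inference so agents do not have to guess.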
deleteTag: Delete existing tag by id (grade A, Destructive)
Delete existing tag by id - Your tag can be deleted if it is not assigned to any resource on your account. Check tag assigments before deleting tag.
| Name | Required | Description | Default |
|---|---|---|---|
| tagId | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations already indicate this is a destructive, non-read-only operation, which the description aligns with by using 'Delete.' The description adds valuable behavioral context beyond annotations by specifying the precondition for successful deletion (tag must not be assigned to any resource) and suggesting a prerequisite action (check assignments). This helps the agent understand failure conditions and proper workflow.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is extremely concise and front-loaded with the core purpose. Both sentences earn their place: the first states what the tool does, and the second provides crucial behavioral guidance. There's zero wasted verbiage or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and poor parameter documentation, the description does an adequate job covering the core operation and important behavioral constraints. However, it doesn't address what happens after deletion (success response, error formats), and leaves two parameters completely unexplained, which creates gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 3 parameters, the description carries full burden for parameter documentation. It only mentions 'tagId' (one of the three parameters) without explaining its format or purpose. The other two parameters (x-request-id and x-trace-id) are completely undocumented in both schema and description, leaving significant gaps in understanding required inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Delete') and resource ('existing tag by id'), making the purpose immediately understandable. However, it doesn't differentiate this tool from other delete operations in the sibling list (like deleteAssignment, deleteDnsZone, etc.) beyond specifying it's for tags.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear usage guidance by stating 'Your tag can be deleted if it is not assigned to any resource on your account' and advising to 'Check tag assignments before deleting tag.' This gives important context about when the operation will succeed versus fail. However, it doesn't explicitly mention alternatives or when not to use this tool.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
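The tag description asks callers to check assignments before deleting. That check-then-delete workflow can be sketched as follows; `list_tag_assignments` and `delete_tag` are hypothetical client wrappers standing in for the assignment-listing and deleteTag tools, not real methods.

```python
def safe_delete_tag(client, tag_id: str) -> bool:
    """Delete a tag only if it has no assignments, mirroring the
    documented precondition. `client` is a hypothetical wrapper."""
    assignments = client.list_tag_assignments(tag_id)  # stand-in call
    if assignments:
        return False  # still assigned: deletion would fail server-side
    client.delete_tag(tag_id)  # stand-in for the deleteTag tool
    return True

# Minimal fake client to demonstrate the flow.
class FakeClient:
    def __init__(self, assigned):
        self._assigned = assigned
    def list_tag_assignments(self, tag_id):
        return self._assigned
    def delete_tag(self, tag_id):
        self._assigned = []

print(safe_delete_tag(FakeClient(["instance-1"]), "tag-42"))  # False
print(safe_delete_tag(FakeClient([]), "tag-42"))              # True
```

Encoding the precondition in the workflow turns a server-side error into a cheap local decision.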
deleteUser: Delete existing user by id (grade A, Destructive)
Delete existing user by id - By deleting a user he will not be able to access any endpoints or resources any longer. In order to temporarily disable a user please update its enabled attribute.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The description adds valuable behavioral context beyond annotations by explaining the consequence of deletion ('he will not be able to access any endpoints or resources any longer'), which clarifies the permanence and impact of the operation. While annotations already indicate destructiveHint=true, the description provides specific user impact details that annotations don't cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each serve distinct purposes: the first states the core functionality, the second provides crucial usage guidance. There's no wasted language or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with no output schema, the description provides essential context about the permanent consequences and alternative approaches. While it doesn't detail return values or error conditions, it addresses the most critical aspects for safe tool invocation given the destructive nature.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 3 parameters, the description mentions 'by id' which hints at the userId parameter but doesn't explain x-request-id or x-trace-id. It provides minimal semantic value beyond what's implied by the tool name, meeting the baseline expectation when schema coverage is low.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the specific action ('Delete'), target resource ('existing user'), and method ('by id'), distinguishing it from sibling tools like 'updateUser' or 'createUser'. It precisely defines what the tool does without being tautological.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description explicitly provides when-not-to-use guidance by stating 'In order to temporarily disable a user please update its `enabled` attribute', offering a clear alternative to this permanent deletion tool. This helps the agent choose between delete and update operations appropriately.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
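The deleteUser description itself names the alternative: toggle the user's enabled attribute for a temporary disable. A sketch of that decision point; `delete_user` and `update_user` are hypothetical wrappers for the deleteUser and updateUser tools.

```python
def remove_access(client, user_id: str, permanent: bool) -> str:
    """Choose between the two documented options: irreversible deletion
    versus a reversible disable via the `enabled` attribute."""
    if permanent:
        client.delete_user(user_id)  # user loses access to all endpoints and resources
        return "deleted"
    client.update_user(user_id, enabled=False)  # reversible: re-enable later
    return "disabled"

# Minimal fake client to demonstrate the branch.
class FakeClient:
    def delete_user(self, user_id):
        pass
    def update_user(self, user_id, enabled):
        pass

print(remove_access(FakeClient(), "u-1", permanent=False))  # disabled
```

This is the "use X instead of Y when Z" guidance the rubric rewards, made executable.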
generateClientSecret: Generate new client secret (grade C, Destructive)
Generate new client secret - Generate and get new client secret.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which already inform the agent that this is a mutation with destructive effects. The description does not add any behavioral context beyond this—it doesn't explain what 'destructive' means here (e.g., invalidates previous secrets, requires re-authentication), nor does it mention rate limits, authentication needs, or side effects. However, it does not contradict the annotations, so it meets the lower bar with annotations present but adds minimal value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but under-specified—it consists of a redundant phrase that doesn't add value. While it avoids verbosity, it wastes its limited space on repetition rather than informative content. It is not front-loaded with useful information, making it inefficient despite its brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a destructive mutation with 2 parameters), lack of output schema, and minimal annotations, the description is incomplete. It does not explain what a client secret is, what the output includes (e.g., the new secret value), error conditions, or security implications. For a tool with destructive hints and no output schema, more context is needed to guide proper use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 2 parameters (x-trace-id and x-request-id) with 0% description coverage, meaning the schema provides no semantic information about them. The description does not mention any parameters, their purposes (e.g., for tracing or request identification), or how they affect the operation. With low schema coverage, the description fails to compensate, leaving parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological, essentially restating the tool name and title ('Generate new client secret - Generate and get new client secret'). It does not provide any additional specificity about what a 'client secret' is, what system it belongs to, or how it differs from sibling tools like 'createSecret' or 'regenerateObjectStorageCredentials'. The purpose is implied but not clearly articulated beyond repetition.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., existing client), exclusions, or compare it to sibling tools like 'createSecret' (which might create a secret initially) or 'regenerateObjectStorageCredentials' (which handles a different resource). The description offers zero contextual usage information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
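As a concrete fix for the redundancy flagged above, here is a sketch of a rewritten description. The wording is illustrative, and the claim that generating a new secret invalidates the previous one is an assumption about typical secret-rotation semantics, not confirmed Contabo behavior.

```python
before = "Generate new client secret - Generate and get new client secret."

# Illustrative rewrite; the invalidation claim is an assumption that
# would need verification against the real API before shipping.
after = (
    "Generate a new OAuth client secret and return it in the response. "
    "Destructive: if the API invalidates the previous secret, existing "
    "integrations must be updated to the new value. "
    "Requires x-request-id (a client-generated UUID)."
)

# The rewrite should stop repeating itself and disclose consequences.
assert "Generate and get" not in after
print(len(before) < len(after))  # True: longer, but every sentence earns its place
```

A few extra tokens that disclose a side effect are cheaper than an agent rotating a secret it cannot roll back.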
getAuthCode: Get auth code for a domain (grade D, Destructive)
Get auth code for a domain - Get auth code for a domain by id
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, suggesting this is a write operation with destructive effects. The description doesn't explain what 'destructive' means in this context (e.g., does it invalidate previous auth codes, trigger security changes, or have side effects?). It adds no behavioral context beyond the annotations, which is problematic since annotations alone don't clarify the nature of the destruction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While technically concise (one repetitive sentence), the description is under-specified rather than efficiently informative. It wastes space repeating itself ('Get auth code for a domain - Get auth code for a domain by id') instead of providing meaningful content. The structure doesn't front-load useful information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
This is a destructive operation (per annotations) with 3 undocumented parameters and no output schema. The description fails to explain what an auth code is, why it's needed, what the destructive effect entails, what the parameters mean, or what the tool returns. Given the complexity and lack of structured documentation, the description is completely inadequate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 3 parameters have descriptions in the schema. The tool description provides zero information about what 'domain', 'x-request-id', or 'x-trace-id' parameters represent, their expected formats, or their purposes. This leaves all parameters completely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is a tautology that essentially repeats the tool name and title ('Get auth code for a domain - Get auth code for a domain by id'). It specifies the verb (get) and resource (auth code for a domain), but provides no additional clarity about what an 'auth code' is used for or what domain context this applies to. It doesn't distinguish from siblings like 'retrieveDomain' or 'listDomains'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is absolutely no guidance on when to use this tool versus alternatives. The description doesn't mention prerequisites, when this operation is needed, or what distinguishes it from other domain-related tools like 'retrieveDomain', 'orderDomain', or 'cancelDomain'. The agent receives no contextual usage information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
getObjectStorageCredentials: Get S3 compatible object storage credentials (Read-only)
Get S3 compatible object storage credentials. - Get S3 compatible object storage credentials for accessing it via S3 compatible tools like aws cli.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | ||
| x-trace-id | No | ||
| credentialId | Yes | ||
| x-request-id | Yes | ||
| objectStorageId | Yes |
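With all five parameters undocumented, the safest reading is that the three identifiers come from the corresponding list tools. A hedged sketch of the arguments under that assumption (identifier formats are placeholders, not confirmed by the schema):

```python
import uuid

def get_credentials_args(user_id, storage_id, credential_id) -> dict:
    # userId, objectStorageId, and credentialId are assumed to be the
    # identifiers returned by the list tools; none of this is stated
    # in the schema.
    return {
        "userId": user_id,
        "objectStorageId": storage_id,
        "credentialId": credential_id,
        "x-request-id": str(uuid.uuid4()),
    }

args = get_credentials_args("user-123", "storage-abc", 42)
```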
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds context about S3 compatibility and tool usage, but does not disclose behavioral traits like authentication requirements, rate limits, or what data is returned. It does not contradict annotations, as 'Get' aligns with read-only.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but inefficiently structured: it repeats the same phrase and includes a dash that adds little value. It is front-loaded with the purpose, but the second part is redundant. It could be more streamlined without losing information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 5 parameters with 0% schema coverage, no output schema, and annotations only covering safety, the description is incomplete. It lacks details on parameter usage, return values (e.g., credential format), error conditions, or operational context. For a credential retrieval tool, this leaves significant gaps for an agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description does not explain any parameters (e.g., userId, objectStorageId, credentialId), their meanings, or relationships. It fails to compensate for the schema gap, leaving all 5 parameters without semantic clarification.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool's purpose ('Get S3 compatible object storage credentials'), which is clear but vague. It repeats the same phrase twice without specifying what credentials are obtained (e.g., access keys, temporary tokens) or their scope. It distinguishes from siblings like 'createObjectStorage' or 'regenerateObjectStorageCredentials' by being a retrieval operation, but lacks specificity about the resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance: it mentions accessing via 'S3 compatible tools like `aws` cli', implying usage for integration. However, it lacks explicit when-to-use rules, prerequisites (e.g., needing existing credentials), or alternatives (e.g., vs. 'listObjectStorageCredentials' for listing vs. getting details). No exclusions or comparisons are provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
listDomains: List all domains (Read-only)
List all domains - List and filter all your domains
| Name | Required | Description | Default |
|---|---|---|---|
| sld | No | ||
| tld | No | ||
| page | No | ||
| size | No | ||
| status | No | ||
| orderBy | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
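One way to cope with the undocumented filter parameters is an argument builder that sends only the filters actually set. The meaning of `tld` (top-level domain) is a guess from the name, as the schema describes nothing:

```python
import uuid

def list_domains_args(page=1, size=25, tld=None, status=None) -> dict:
    args = {"page": page, "size": size,
            "x-request-id": str(uuid.uuid4())}
    # include optional filters only when explicitly set
    for key, value in {"tld": tld, "status": status}.items():
        if value is not None:
            args[key] = value
    return args

args = list_domains_args(tld="com")
```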
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds minimal behavioral context by mentioning filtering, but it doesn't disclose pagination behavior (implied by 'page' and 'size' parameters), rate limits, or authentication needs. With annotations covering safety, it adds some value but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two short phrases, but it's slightly repetitive ('List all domains - List and filter all your domains'). It front-loads the core action but could be more structured by separating listing from filtering details. Overall, it's efficient with minimal waste.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (8 parameters, 0% schema coverage, no output schema), the description is inadequate. It doesn't explain the filtering parameters, pagination behavior, or return format, leaving significant gaps. With annotations providing only safety hints, more context is needed for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters like 'sld', 'tld', 'status', 'orderBy', etc., are undocumented in the schema. The description only vaguely mentions 'filter all your domains', which doesn't explain what parameters are available or their purposes. It fails to compensate for the low schema coverage, leaving most parameters semantically unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'List all domains - List and filter all your domains', which clarifies the verb (list) and resource (domains). However, it's vague about the filtering capability ('filter all your domains') without specifying what filters are available, and it doesn't distinguish this tool from potential siblings like 'retrieveDomain' or 'retrieveDomainsAuditsList' in the provided list.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description mentions filtering but doesn't specify scenarios or prerequisites, and it doesn't reference sibling tools like 'retrieveDomain' for single-domain retrieval or 'retrieveDomainsAuditsList' for audit logs, leaving the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
listHandles: List all handles (Read-only)
List all handles - List and filter all your handles
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| search | No | ||
| orderBy | No | ||
| lastName | No | ||
| countries | No | ||
| firstName | No | ||
| handleType | No | ||
| x-trace-id | No | ||
| showDefaults | No | ||
| x-request-id | Yes |
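With 12 mostly optional, undocumented parameters, a defensive wrapper that whitelists the names from the table above and drops unset filters helps avoid silent typos. A sketch, with the allowed names taken verbatim from the parameter table:

```python
import uuid

def list_handles_args(**filters) -> dict:
    allowed = {"name", "page", "size", "search", "orderBy",
               "lastName", "countries", "firstName", "handleType",
               "showDefaults", "x-trace-id"}
    unknown = set(filters) - allowed
    if unknown:
        raise ValueError(f"unknown filter(s): {sorted(unknown)}")
    args = {k: v for k, v in filters.items() if v is not None}
    args["x-request-id"] = str(uuid.uuid4())
    return args

args = list_handles_args(firstName="Ada", page=1, handleType=None)
```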
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive, which the description doesn't contradict. However, it adds minimal behavioral context beyond listing and filtering—no details on pagination, rate limits, or authentication needs. With annotations covering safety, this is adequate but not rich.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, with no wasted words. However, it's slightly repetitive ('List all handles - List and filter all your handles'), which reduces efficiency but maintains clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 12 parameters with 0% schema coverage, no output schema, and annotations only covering safety, the description is incomplete. It doesn't explain return values, filtering logic, or handle types, leaving significant gaps for a complex list/filter tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions filtering but doesn't explain any of the 12 parameters (e.g., 'name', 'page', 'handleType'). This leaves most semantics undocumented, failing to add meaningful value beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters handles, which is a clear purpose. However, it doesn't specify what 'handles' are (e.g., user accounts, identifiers) or differentiate from sibling tools like 'retrieveHandle' (singular) or 'createHandle', making it somewhat vague.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'retrieveHandle' (for a single handle) or 'createHandle'. The description only repeats the action without context or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
listObjectStorageCredentials: Get list of S3 compatible object storage credentials for user (Read-only)
Get list of S3 compatible object storage credentials for user. - Get list of S3 compatible object storage credentials for accessing it via S3 compatible tools like aws cli.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| userId | Yes | ||
| orderBy | No | ||
| regionName | No | ||
| x-trace-id | No | ||
| displayName | No | ||
| x-request-id | Yes | ||
| objectStorageId | No |
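Since `objectStorageId` is optional here, unlike in getObjectStorageCredentials, one plausible reading (not confirmed by the description) is that omitting it lists credentials across all of the user's object storages:

```python
import uuid

def list_credentials_args(user_id, storage_id=None) -> dict:
    args = {"userId": user_id, "x-request-id": str(uuid.uuid4())}
    if storage_id is not None:
        # presumably scopes the listing to a single object storage
        args["objectStorageId"] = storage_id
    return args

across_all = list_credentials_args("user-123")
one_storage = list_credentials_args("user-123", "storage-abc")
```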
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, covering safety. The description adds context about S3 compatibility and usage with tools like 'aws' cli, which is useful but doesn't detail behavioral traits like pagination (implied by 'page' and 'size' parameters), rate limits, or authentication requirements beyond the schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences with some repetition ('Get list' appears twice). It's front-loaded but includes redundant phrasing; the second sentence adds value by explaining compatibility, but could be more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 9 parameters (0% schema coverage), no output schema, and annotations only covering read/destructive hints, the description is insufficient. It lacks details on parameter meanings, return values, pagination behavior, and error handling, making it incomplete for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'for user' and 'S3 compatible', which loosely relate to 'userId' and 'objectStorageId', but doesn't explain the purpose of any parameters (e.g., 'page', 'size', 'orderBy', 'regionName', 'displayName', 'x-request-id', 'x-trace-id'), leaving them undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Get list') and resource ('S3 compatible object storage credentials for user'), making the purpose specific. It distinguishes from some siblings like 'getObjectStorageCredentials' (singular) by emphasizing 'list', but doesn't explicitly contrast with other list/retrieve tools for object storage or credentials.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'getObjectStorageCredentials' (singular) or other list tools. The description mentions accessing via S3 compatible tools like 'aws' cli, which hints at a use case but doesn't provide clear when/when-not instructions or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
orderDomain: Create or transfer a domain (Destructive)
Create or transfer a domain - Create or transfer a domain
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| orderDomainBody | Yes |
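The schema's 'orderDomainBody' exposes sub-properties such as 'domain' and 'authCode' without descriptions. One plausible split, purely an inference, is that supplying an auth code turns a registration into a transfer:

```python
import uuid

def order_domain_args(domain, auth_code=None) -> dict:
    body = {"domain": domain}
    if auth_code is not None:
        # assumption: an auth code signals a transfer rather than
        # a fresh registration; the schema does not say
        body["authCode"] = auth_code
    return {"orderDomainBody": body,
            "x-request-id": str(uuid.uuid4())}

register = order_domain_args("example.com")
transfer = order_domain_args("example.net", auth_code="s3cret")
```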
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a destructive (destructiveHint: true) and non-read-only (readOnlyHint: false) operation, which the description doesn't explicitly state. However, the description doesn't contradict these annotations - 'create or transfer' implies mutation. The description adds no behavioral context beyond what annotations provide (no rate limits, auth requirements, or specific destructive consequences mentioned).
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While technically concise, the description is under-specified rather than efficiently informative. The repetition 'Create or transfer a domain - Create or transfer a domain' wastes space without adding value. It's not front-loaded with useful information - it's essentially empty content masquerading as conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 3 parameters (including complex nested objects), 0% schema coverage, no output schema, and no annotations beyond basic hints, this description is completely inadequate. It provides no information about what the tool actually does, when to use it, what parameters mean, or what to expect as output. The description fails to provide any meaningful context for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the 3 parameters, the description carries full burden for explaining parameter meaning but provides zero information about any parameters. It doesn't mention 'x-request-id', 'orderDomainBody', or any of their sub-properties like 'domain', 'handles', 'authCode', etc. The description fails completely to compensate for the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Create or transfer a domain - Create or transfer a domain' is essentially a tautology that restates the title with minimal variation. It doesn't provide any meaningful elaboration on what the tool actually does beyond the obvious from the name/title. No specific details about the domain creation or transfer process are mentioned.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides absolutely no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, conditions for choosing between creation and transfer, or how it differs from sibling tools like 'cancelDomain', 'confirmDomainTransferOut', or 'validateDomainAvailability'. The agent receives zero usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
patchInstance: Update specific instance (Destructive)
Update specific instance - Update specific instance by instanceId.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| patchInstanceBody | Yes |
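'displayName' is the one field known to live inside 'patchInstanceBody'; whether anything else is patchable is undocumented. A minimal sketch under that assumption:

```python
import uuid

def patch_instance_args(instance_id, display_name) -> dict:
    # 'displayName' is assumed to be a valid body field; the schema
    # documents neither the body shape nor the instanceId format
    return {
        "instanceId": instance_id,
        "patchInstanceBody": {"displayName": display_name},
        "x-request-id": str(uuid.uuid4()),
    }

args = patch_instance_args(101, "web-01")
```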
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description doesn't contradict. However, it adds minimal context beyond this, failing to elaborate on what 'destructive' entails (e.g., irreversible changes) or other behavioral traits like authentication needs or rate limits. With annotations covering safety, the description provides some value but is insufficient for a mutation tool.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is overly concise to the point of under-specification, consisting of a repetitive phrase that doesn't front-load useful information. It wastes space on redundancy ('Update specific instance' repeated) instead of providing actionable details, making it inefficient rather than appropriately sized.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 4 parameters, 0% schema coverage, no output schema, and many closely related sibling tools to disambiguate from, the description is incomplete. It lacks details on what 'instance' refers to, the scope of updates, error conditions, or return values, leaving significant gaps for an AI agent to infer correct usage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate, yet it only mentions 'instanceId', without explaining its role or the other parameters like 'x-request-id' and 'patchInstanceBody'. It doesn't clarify what fields can be updated in 'patchInstanceBody' (e.g., 'displayName'), leaving parameters largely undocumented and adding little meaning beyond the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update specific instance - Update specific instance by instanceId' is tautological, essentially restating the name and title without adding meaningful detail. It mentions the 'instanceId' parameter but doesn't specify what an 'instance' is or what aspects can be updated, making the purpose vague despite the verb 'Update'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'updateInstance' or other sibling tools. The description lacks context about prerequisites, such as needing an existing instance, and doesn't mention any exclusions or specific scenarios for its use.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
patchPrivateNetwork: Update a Private Network by id (Destructive)
Update a Private Network by id - Update a Private Network by id in your account.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| privateNetworkId | Yes | ||
| patchPrivateNetworkBody | Yes |
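Assuming 'name' and 'description' are the patchable fields (a guess; the schema documents neither), the call could be assembled as:

```python
import uuid

def patch_private_network_args(network_id, **fields) -> dict:
    # field names such as 'name' and 'description' are placeholders
    # until verified against the Contabo API reference
    return {
        "privateNetworkId": network_id,
        "patchPrivateNetworkBody": dict(fields),
        "x-request-id": str(uuid.uuid4()),
    }

args = patch_private_network_args(7, name="staging-net")
```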
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description does not contradict (it implies mutation with 'Update'). However, the description adds no behavioral context beyond this—it does not explain what 'destructive' entails (e.g., irreversible changes, impact on connected instances), rate limits, authentication needs, or error handling. With annotations covering safety, it meets a baseline but lacks enrichment.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, consisting of a single sentence that states the core action. However, it is repetitive ('Update a Private Network by id' appears twice), which slightly reduces efficiency. Overall, it avoids unnecessary verbosity but is under-specified.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, nested objects, destructive hint) and lack of output schema, the description is incomplete. It does not explain what the tool updates, the required parameters, potential side effects, or return values. With annotations providing some safety context but no parameter or output details, it falls short of being adequately informative.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description provides no information about parameters—it does not mention 'x-request-id', 'privateNetworkId', or 'patchPrivateNetworkBody', nor does it explain what fields can be updated (e.g., name, description). This fails to compensate for the schema gap, leaving parameters largely unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological, essentially restating the title ('Update a Private Network by id') with minimal variation. It does not specify what aspects can be updated (e.g., name, description) or how it differs from sibling tools like 'updatePrivateNetwork' (if present, though not listed) or 'createPrivateNetwork'. The verb 'Update' is clear, but the resource scope and differentiation are lacking.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. It does not mention prerequisites (e.g., needing an existing private network ID), exclusions, or comparisons to similar tools like 'updatePrivateNetwork' (not in siblings) or 'patchInstance'. The description provides no context for usage decisions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
regenerateObjectStorageCredentials: Regenerates secret key of specified user for the S3 compatible object storages (Destructive)
Regenerates secret key of specified user for the S3 compatible object storages. - Regenerates secret key of specified user for the a specific S3 compatible object storages.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | ||
| x-trace-id | No | ||
| credentialId | Yes | ||
| x-request-id | Yes | ||
| objectStorageId | Yes |
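Since the destructive hint suggests the current secret key stops working once regenerated (an inference; the description never says so), wrapping the call behind an explicit confirmation is a reasonable client-side guard:

```python
import uuid

def regenerate_credentials_args(user_id, storage_id, credential_id,
                                confirmed=False) -> dict:
    # refuse to build arguments for a presumed-destructive call
    # unless the caller opts in explicitly
    if not confirmed:
        raise PermissionError(
            "refusing destructive call without confirmation")
    return {
        "userId": user_id,
        "objectStorageId": storage_id,
        "credentialId": credential_id,
        "x-request-id": str(uuid.uuid4()),
    }

args = regenerate_credentials_args("user-123", "storage-abc", 42,
                                   confirmed=True)
```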
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a destructive (destructiveHint: true) and non-read-only (readOnlyHint: false) operation, which the description doesn't explicitly state. However, 'regenerates' implies mutation and potential disruption, adding some context. No additional behavioral details like rate limits or auth needs are provided, but annotations cover the core safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive, stating the same phrase twice with minor variations ('the S3' vs 'the a specific S3'), which adds no value. It's front-loaded but wastes space on redundancy, reducing clarity and efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 5 parameters, 0% schema coverage, and no output schema, the description is inadequate. It lacks parameter explanations, behavioral details beyond annotations, and output information, making it incomplete for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 5 parameters (4 required), the description fails to explain any parameters, such as 'userId', 'objectStorageId', or 'credentialId'. This leaves critical input semantics undocumented, significantly hindering tool invocation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('regenerates') and resource ('secret key of specified user for the S3 compatible object storages'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'getObjectStorageCredentials' or 'createSecret', which handle related but different operations, preventing a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'getObjectStorageCredentials' for viewing credentials or 'createSecret' for initial creation. It lacks context on prerequisites, exclusions, or typical scenarios, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
reinstallInstance: Reinstall specific instance (Grade C, Destructive)
Reinstall specific instance - You can reinstall a specific instance with a new image and optionally add ssh keys, a root password or cloud-init.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| reinstallInstanceBody | Yes |
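The `reinstallInstanceBody` field names below (`imageId`, `sshKeys`, `rootPassword`, `userData`) are assumptions inferred from the description's mention of a new image, ssh keys, a root password, and cloud-init; the URL and HTTP method are likewise assumed, not taken from this listing:

```python
import uuid

def build_reinstall_request(instance_id, image_id,
                            ssh_keys=None, root_password=None, user_data=None):
    body = {"imageId": image_id}              # the new image (field name assumed)
    if ssh_keys is not None:
        body["sshKeys"] = ssh_keys            # stored SSH key ids (assumed)
    if root_password is not None:
        body["rootPassword"] = root_password  # stored secret id, not plaintext (assumed)
    if user_data is not None:
        body["userData"] = user_data          # cloud-init config (assumed)
    return {
        "method": "PUT",  # assumption; verify against the Contabo API reference
        "url": f"https://api.contabo.com/v1/compute/instances/{instance_id}",
        "headers": {"x-request-id": str(uuid.uuid4())},  # required per the table
        "json": body,
    }

req = build_reinstall_request(201234, "example-image-id", ssh_keys=[629])
```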
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations provide readOnlyHint=false and destructiveHint=true, indicating this is a destructive mutation. The description adds context about what gets reconfigured (image, ssh keys, password, cloud-init), which aligns with the destructive nature. However, it doesn't mention potential downtime, data loss, or authentication requirements beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core action. The second sentence efficiently lists optional configurations. However, the first part repeats the title ('Reinstall specific instance'), which is slightly redundant.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 4 parameters, 0% schema coverage, and no output schema, the description is inadequate. It doesn't explain the return value, error conditions, or important behavioral details like whether the instance must be stopped first. The annotations help but don't replace needed operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description carries full burden for parameter meaning. It mentions 'new image', 'ssh keys', 'root password', and 'cloud-init', which correspond to some parameters, but doesn't cover all 4 parameters (e.g., x-request-id, x-trace-id) or explain the structure of reinstallInstanceBody. The description adds some value but doesn't fully compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('reinstall') and resource ('specific instance'), and specifies what can be configured during reinstallation (new image, ssh keys, root password, cloud-init). However, it doesn't explicitly differentiate from siblings like 'createInstance' or 'patchInstance' beyond the reinstall focus.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'createInstance' (for new instances) or 'patchInstance' (for updates without reinstallation). The description only states what the tool does, not when it's appropriate or what prerequisites might exist.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
removeHandle: Remove specific handle (Grade C, Destructive)
Remove specific handle - Remove specific handle
| Name | Required | Description | Default |
|---|---|---|---|
| handleId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
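Since removeHandle's only safety signal is its annotation pair (readOnlyHint=false, destructiveHint=true), an MCP client can at least gate such tools behind explicit confirmation. A sketch using the standard MCP annotation names, with defaults chosen conservatively:

```python
def needs_confirmation(annotations):
    """True when a tool mutates state and is flagged destructive.

    Uses the MCP annotation hints cited throughout these assessments;
    missing hints fall back to the unsafe interpretation so that an
    undocumented tool is confirmed rather than auto-executed.
    """
    read_only = annotations.get("readOnlyHint", False)
    destructive = annotations.get("destructiveHint", True)
    return (not read_only) and destructive
```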
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which already convey that this is a destructive write operation. The description doesn't add behavioral details beyond this, such as irreversible effects, authentication needs, or error conditions. However, it doesn't contradict the annotations, so it meets the lower bar with annotations present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is pure repetition ('Remove specific handle - Remove specific handle'), so its brevity adds no value. It lacks structure and spends its few words restating the title instead of providing useful information, making it under-specified rather than appropriately brief.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given a destructive tool with 3 parameters, 0% schema coverage, no output schema, and no annotations beyond basic hints, the description is incomplete. It doesn't explain what a handle is, the consequences of removal, parameter usage, or expected outcomes, leaving significant gaps for the agent to operate effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description provides no information about the parameters (handleId, x-request-id, x-trace-id), their meanings, or formats. It fails to compensate for the lack of schema documentation, leaving the agent guessing about required inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Remove specific handle - Remove specific handle' is tautological, essentially restating the name/title without adding meaningful clarification. It doesn't specify what a 'handle' represents in this context or what the removal entails, making the purpose vague despite the verb+resource structure.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites, such as needing an existing handle, or differentiate from sibling tools like 'deleteHandle' (not listed but implied by context) or 'updateHandle'. This leaves the agent without context for proper tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rescue: Rescue a compute instance / resource identified by its id (Grade B, Destructive)
Rescue a compute instance / resource identified by its id - You can reboot your instance in rescue mode to resolve system issues. The rescue system is Linux based and it's booted instead of your regular operating system. The disk containing your operating system, software and your data is already mounted for you to access and repair/modify files. After a reboot your compute instance will boot your operating system. Please note that this is for advanced users.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| rescueBody | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
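The assessment above names `sshKeys`, `userData`, and `rootPassword` as `rescueBody` fields; a sketch of assembling the call, where the path and HTTP method are assumptions:

```python
import uuid

def build_rescue_request(instance_id, ssh_keys=None, root_password=None, user_data=None):
    body = {}
    if ssh_keys is not None:
        body["sshKeys"] = ssh_keys            # keys injected into the rescue system
    if root_password is not None:
        body["rootPassword"] = root_password  # stored secret id (assumed)
    if user_data is not None:
        body["userData"] = user_data          # cloud-init for the rescue boot
    return {
        "method": "POST",  # assumption
        "url": f"https://api.contabo.com/v1/compute/instances/{instance_id}/actions/rescue",
        "headers": {"x-request-id": str(uuid.uuid4())},
        "json": body,
    }

req = build_rescue_request(201234, ssh_keys=[629])
```

Remember from the description that the instance reboots back into its regular operating system on the next reboot, so the rescue session is transient.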
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description aligns with by describing a reboot that modifies system state. The description adds valuable context beyond annotations: it explains the rescue mode behavior (Linux-based, mounts OS disk, allows file repair, automatic reboot back to regular OS), which helps the agent understand the operational impact. No contradiction with annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is well-structured and front-loaded with the core purpose. It uses clear sentences to explain rescue mode mechanics and warnings. While slightly verbose, each sentence adds useful information (e.g., Linux-based system, disk mounting, reboot behavior, advanced user note).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and 0% schema description coverage, the description adequately covers the behavioral aspects (what rescue mode does, its effects). However, it lacks critical details: no parameter explanations, no error handling or return values, and minimal guidance on usage versus siblings. This leaves gaps for safe agent invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameter descriptions are missing in the schema. The tool description does not mention any parameters (instanceId, rescueBody with sshKeys, userData, rootPassword, x-request-id, x-trace-id), leaving their purpose and usage completely undocumented. This fails to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('rescue'), target ('compute instance / resource'), and mechanism ('reboot your instance in rescue mode to resolve system issues'). It explains what rescue mode does (Linux-based system, mounts OS disk for repair, reboots back to regular OS). However, it doesn't explicitly differentiate from siblings like 'restart' or 'reinstallInstance' beyond mentioning it's for 'advanced users'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('to resolve system issues') and notes it's 'for advanced users,' which provides some guidance. However, it doesn't explicitly state when to use this versus alternatives like 'restart' (normal reboot) or 'reinstallInstance' (full OS reinstall), nor does it mention prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resendEmailVerification: Resend email verification (Grade C, Destructive)
Resend email verification - Resend email verification for a specific user
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | ||
| x-trace-id | No | ||
| redirectUrl | No | ||
| x-request-id | Yes |
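A sketch of assembling this call from the table's parameters; treating `redirectUrl` as a query parameter, and the path and method themselves, are assumptions:

```python
import uuid
from urllib.parse import urlencode

def build_resend_verification(user_id, redirect_url=None):
    # redirectUrl is the only optional non-header parameter in the table;
    # presumably it is where the user lands after clicking the link (assumed)
    url = f"https://api.contabo.com/v1/users/{user_id}/resend-email-verification"
    if redirect_url is not None:
        url += "?" + urlencode({"redirectUrl": redirect_url})
    return {
        "method": "POST",  # assumption
        "url": url,
        "headers": {"x-request-id": str(uuid.uuid4())},
    }

req = build_resend_verification("u-42", redirect_url="https://example.com/verified")
```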
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, suggesting a non-read operation with potential side effects. The description adds that it's for 'resending' verification, implying it triggers an email send, which aligns with the destructive hint. However, it doesn't disclose additional behaviors like rate limits, auth requirements, or what happens if the user is already verified, leaving gaps beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, with no redundant sentences. However, it's slightly repetitive ('Resend email verification' appears twice), and the lack of detail makes it feel under-specified rather than optimally concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (mutative action with 4 parameters, no output schema, and 0% schema coverage), the description is incomplete. It doesn't explain the outcome (e.g., success/failure response, email content), parameter usage, or error conditions. With annotations providing only basic hints, more context is needed for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description mentions 'for a specific user', hinting at the 'userId' parameter, but doesn't explain the other three parameters (x-request-id, x-trace-id, redirectUrl) or their purposes. It adds minimal semantic value beyond the schema's structure.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('Resend email verification') and target ('for a specific user'), which clarifies the purpose. However, it's somewhat vague about what 'email verification' entails (e.g., verification link, code) and doesn't distinguish from siblings like 'resetPassword' or 'generateClientSecret', which could involve similar user-communication actions.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., user must exist, email must be unverified), exclusions, or related tools like 'createUser' or 'updateUser' that might handle verification differently. The description alone offers no usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resetPassword: Send reset password email (Grade C, Destructive)
Send reset password email - Send reset password email for a specific user
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | ||
| x-trace-id | No | ||
| redirectUrl | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description doesn't explicitly address. However, 'send reset password email' implies a mutation (changing state by triggering an email) and potential disruption (resetting passwords is destructive), so it aligns with annotations. The description adds minimal behavioral context beyond annotations, such as specifying it's for a 'specific user', but doesn't detail rate limits, auth needs, or email content.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but redundant, repeating 'Send reset password email' twice. It's front-loaded with the core action but wastes words on repetition. While concise, it lacks efficiency as the second phrase adds no new information, failing to earn its place.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive mutation with 4 parameters, no output schema, and 0% schema coverage), the description is inadequate. It doesn't explain what the tool returns, error conditions, or side effects. With annotations covering safety but no output schema, more detail on behavior and parameters is needed for completeness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'for a specific user', which hints at the 'userId' parameter, but doesn't explain the other three parameters (x-request-id, x-trace-id, redirectUrl). No details are provided on parameter formats, purposes, or usage, leaving significant gaps in understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description restates the tool name and title without adding specificity. It says 'Send reset password email for a specific user', which is essentially a tautology of the title 'Send reset password email'. It doesn't clarify what the email contains, how it's delivered, or what happens after sending. Compared to sibling tools like 'resetPasswordAction' (which might handle the actual password change), it lacks differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., user must exist), when it's appropriate (e.g., for forgotten passwords), or what to do after sending (e.g., follow up with 'resetPasswordAction'). The description offers no context for usage relative to other tools in the list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
resetPasswordAction: Reset password for a compute instance / resource referenced by an id (Grade B, Destructive)
Reset password for a compute instance / resource referenced by an id - Reset password for a compute instance / resource referenced by an id. This will reset the current password to the password that you provided in the body of this request.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| resetPasswordActionBody | Yes |
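The description only says the new password is supplied "in the body of this request," so the body field name here is an assumption, as are the path and method:

```python
import uuid

def build_reset_password_action(instance_id, new_password):
    # "rootPassword" is a guessed field name; Contabo-style APIs often take a
    # stored secret id here rather than a plaintext password (assumption)
    return {
        "method": "POST",  # assumption
        "url": f"https://api.contabo.com/v1/compute/instances/{instance_id}"
               "/actions/resetPassword",
        "headers": {"x-request-id": str(uuid.uuid4())},
        "json": {"rootPassword": new_password},
    }

req = build_reset_password_action(201234, 4711)
```

Note the contrast the assessments ask for: resetPassword emails a reset link to a panel user, while this tool directly sets a new credential on a compute instance.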
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by describing a password reset action. The description adds useful context by specifying that it 'will reset the current password to the password that you provided in the body,' clarifying the mutation behavior. However, it doesn't mention potential side effects like instance downtime or authentication requirements beyond what annotations imply.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive, restating the same phrase twice, which adds no value. While it's brief, the redundancy reduces efficiency. The information is front-loaded but could be more streamlined by eliminating the repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with no output schema and 0% parameter coverage in the schema, the description is insufficient. It lacks details on return values, error conditions, authentication needs, or system-specific behaviors (e.g., OS differences), making it incomplete for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the input schema, the description fails to explain any parameters. It mentions providing a password 'in the body' but doesn't detail the required fields (instanceId, x-request-id, resetPasswordActionBody) or their purposes, leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('reset password') and target ('compute instance / resource referenced by an id'), which is specific and actionable. However, it doesn't explicitly differentiate from the sibling tool 'resetPassword' (without 'Action'), leaving some ambiguity about when to use one versus the other.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'resetPassword' or other instance management tools. It lacks context about prerequisites, timing, or exclusions, offering only a basic functional statement without operational guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
restart: Restart a compute instance / resource identified by its id (Grade C, Destructive)
Restart a compute instance / resource identified by its id. - To restart a compute instance that has been identified by its id, you should perform a restart action on it.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
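The assessments repeatedly fault restart, rescue, reinstallInstance, and resetPasswordAction for not differentiating themselves. A toy routing table makes the distinction the descriptions leave implicit (the mapping is editorial, not from the API):

```python
def pick_instance_tool(goal):
    """Map an agent's goal to the narrowest tool in this listing."""
    routes = {
        "reboot": "restart",                     # plain power-cycle, disk untouched
        "repair_files": "rescue",                # boot rescue Linux, OS disk mounted
        "fresh_os": "reinstallInstance",         # wipe and install a new image
        "lost_password": "resetPasswordAction",  # set a new root credential
    }
    return routes.get(goal, "unknown")
```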
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, covering safety and mutation. The description adds minimal context by implying the tool acts on an existing instance, but doesn't detail side effects like downtime, state changes, or error conditions. It doesn't contradict annotations, but offers little beyond them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences that are nearly identical, wasting space without adding value. It's front-loaded but repetitive, lacking efficiency. Each sentence doesn't earn its place, as the second merely rephrases the first.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 3 parameters (0% schema coverage), no output schema, and many siblings, the description is inadequate. It lacks details on parameters, behavior, outcomes, or differentiation, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions 'id' but doesn't explain parameters like 'instanceId', 'x-request-id', or 'x-trace-id', leaving their purposes and formats undocumented. This fails to add meaning beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('restart') and resource ('compute instance/resource'), but is repetitive and vague about scope. It doesn't distinguish from siblings like 'shutdown', 'start', or 'stop' beyond the basic verb, and the second sentence merely restates the first without adding clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'shutdown', 'start', or 'stop' is provided. The description only repeats the basic action without context, prerequisites, or exclusions, leaving the agent to infer usage from tool names alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveApiPermissionsList: List of API permissions (Grade B, Read-only)
List of API permissions - List all available API permissions. This list serves as a reference for specifying roles. As endpoints differ in their possibilities, not all actions are available for each endpoint.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| apiName | No | ||
| orderBy | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
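A sketch of assembling this query from the table's parameters; the path, the 1-based page numbering, and the `orderBy` format are all assumptions:

```python
import uuid
from urllib.parse import urlencode

def build_permissions_request(page=1, size=50, api_name=None, order_by=None):
    params = {"page": page, "size": size}
    if api_name is not None:
        params["apiName"] = api_name   # filter by API name (assumed semantics)
    if order_by is not None:
        params["orderBy"] = order_by   # e.g. "name:asc" (format assumed)
    return {
        "method": "GET",
        "url": "https://api.contabo.com/v1/roles/api-permissions?" + urlencode(params),
        "headers": {"x-request-id": str(uuid.uuid4())},
    }

req = build_permissions_request(page=2, api_name="compute")
```

An agent building roles would page through this list first, then pass the permitted actions into a role-creation tool.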
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, indicating a safe read operation. The description adds that this is a 'list' and a 'reference,' but doesn't disclose additional behavioral traits like pagination behavior (implied by page/size parameters), rate limits, or authentication needs. It provides some context but minimal beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences that directly state the tool's function and context. It's front-loaded with the core purpose and avoids unnecessary details, though it could be slightly more structured by separating usage notes.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 0% schema coverage, no output schema, and annotations only covering safety, the description is incomplete. It doesn't explain parameter usage, return values, or behavioral details like pagination, making it inadequate for a tool with this complexity and lack of structured documentation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden for parameter meaning. It mentions no parameters explicitly, failing to explain the purpose of 'page', 'size', 'apiName', 'orderBy', 'x-trace-id', or 'x-request-id'. This leaves key filtering and pagination semantics undocumented, not compensating for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List all available API permissions' with the context that it 'serves as a reference for specifying roles.' It specifies the verb ('List') and resource ('API permissions'), but doesn't explicitly differentiate from sibling tools like 'retrieveRoleList' or 'retrieveAssignmentList' beyond mentioning its unique role reference purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context by stating it 'serves as a reference for specifying roles' and notes that 'not all actions are available for each endpoint,' which suggests when this reference might be needed. However, it doesn't provide explicit when-to-use guidance, alternatives, or exclusions compared to other list tools in the sibling set.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveAssignment: Get specific assignment for the tag (Grade C, Read-only)
Get specific assignment for the tag - Get attributes for a specific tag assignment in your account. For this the resource type and resource id is required.
| Name | Required | Description | Default |
|---|---|---|---|
| tagId | Yes | ||
| resourceId | Yes | ||
| x-trace-id | No | ||
| resourceType | Yes | ||
| x-request-id | Yes |
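The description says the resource type and resource id are required alongside the tag; a sketch of the lookup, where the path ordering is an assumption:

```python
import uuid

def build_assignment_request(tag_id, resource_type, resource_id):
    # resource_type is presumably an enum such as "instance" or
    # "object-storage" (values assumed; the listing documents none)
    return {
        "method": "GET",
        "url": (f"https://api.contabo.com/v1/tags/{tag_id}"
                f"/assignments/{resource_type}/{resource_id}"),
        "headers": {"x-request-id": str(uuid.uuid4())},
    }

req = build_assignment_request(177, "instance", 201234)
```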
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, which already inform the agent this is a safe read operation. The description adds context by specifying it retrieves 'attributes' for a 'specific tag assignment,' but does not disclose additional behavioral traits such as error conditions, authentication needs, or rate limits. It does not contradict annotations, so it earns a baseline score for adding some value beyond the structured data.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose, but it is somewhat redundant (repeating 'Get specific assignment for the tag') and could be more structured. It uses two sentences efficiently, but the repetition slightly reduces its conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has no output schema, 0% schema description coverage, and annotations only cover safety, the description is incomplete. It does not explain return values, error handling, or the semantics of undocumented parameters like x-request-id. For a tool with multiple parameters and no structured output information, more detail is needed to be fully helpful.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning no parameters are documented in the schema. The description mentions 'resource type and resource id is required,' which partially explains two of the five parameters (resourceType and resourceId) but ignores tagId, x-request-id, and x-trace-id. It fails to fully compensate for the lack of schema documentation, leaving most parameters unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get attributes for a specific tag assignment in your account.' It specifies the verb ('Get'), resource ('tag assignment'), and scope ('specific'), but does not explicitly differentiate it from sibling tools like 'retrieveAssignmentList' or 'retrieveTag', which reduces clarity in distinguishing exact use cases.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance by stating 'For this the resource type and resource id is required,' which hints at prerequisites but does not explain when to use this tool versus alternatives like 'retrieveAssignmentList' or 'retrieveTag'. No explicit when/when-not scenarios or comparisons to siblings are included.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveAssignmentList - List tag assignments (C, Read-only)
List tag assignments - List and filter all existing assignments for a tag in your account
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| tagId | Yes | ||
| orderBy | No | ||
| x-trace-id | No | ||
| resourceType | No | ||
| x-request-id | Yes | ||
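Pagination is only implied by the `page` and `size` parameters. A sketch of a client-side pagination loop; `call_tool` stands in for the real MCP tool invocation, and 1-based page numbering plus a `{"data": [...]}` response shape are assumptions:

```python
import uuid

# Hypothetical pagination loop over retrieveAssignmentList results.
# `call_tool` stands in for the real tool invocation; 1-based page
# numbering and the {"data": [...]} response shape are assumptions.
def list_all_assignments(tag_id, call_tool, size=50):
    results, page = [], 1
    while True:
        resp = call_tool({
            "tagId": tag_id,
            "page": page,
            "size": size,
            "x-request-id": str(uuid.uuid4()),
        })
        batch = resp.get("data", [])
        results.extend(batch)
        if len(batch) < size:  # a short page signals the last page
            break
        page += 1
    return results

# Fake tool call for illustration: two full pages of 2, then a short page.
pages = {1: ["a", "b"], 2: ["c", "d"], 3: ["e"]}
fake = lambda args: {"data": pages.get(args["page"], [])}
all_items = list_all_assignments(7, fake, size=2)
```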
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds that it lists 'all existing assignments for a tag in your account', which provides scope context. However, it doesn't mention pagination behavior (implied by page/size parameters), rate limits, or authentication requirements beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose. The repetition of 'List tag assignments' is slightly redundant but not excessive. Every phrase adds some value, though it could be more structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 7 parameters (0% documented in schema), no output schema, and no behavioral annotations beyond read-only, the description is insufficient. It doesn't explain parameter meanings, return format, pagination, or error conditions. The annotations help with safety but don't compensate for missing operational context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 7 parameters, the description carries full burden but only mentions 'filter' generically. It doesn't explain what tagId represents, what page/size do, what resourceType filters, or what orderBy controls. The description fails to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('tag assignments'), and specifies filtering capability. It distinguishes from siblings like 'retrieveAssignment' (singular) and 'createAssignment' (write operation), though it doesn't explicitly contrast with other list tools like 'retrieveTagList'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions filtering but provides no guidance on when to use this tool versus alternatives like 'retrieveAssignment' (singular retrieval) or other list tools. No context about prerequisites, typical use cases, or exclusion criteria is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveAssignmentsAuditsList - List history about your assignments (audit) (C, Read-only)
List history about your assignments (audit) - List and filters the history about your assignments.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| tagId | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| resourceId | No | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
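The audit filters (`startDate`, `endDate`, `changedBy`, and so on) are undocumented. A sketch of a query builder that only sends the filters actually set, assuming ISO-8601 date strings; both that format and the parameter semantics are guesses, since the schema does not say:

```python
import uuid
from datetime import date

# Hypothetical filter builder for retrieveAssignmentsAuditsList.
# ISO-8601 dates and the parameter semantics are assumptions; only
# non-empty filters are sent, so unset ones fall back to API defaults.
def build_audit_query(start: date = None, end: date = None,
                      tag_id: int = None, changed_by: str = None) -> dict:
    query = {"x-request-id": str(uuid.uuid4())}
    if start:
        query["startDate"] = start.isoformat()  # e.g. "2024-01-01"
    if end:
        query["endDate"] = end.isoformat()
    if tag_id is not None:
        query["tagId"] = tag_id
    if changed_by:
        query["changedBy"] = changed_by
    return query

q = build_audit_query(start=date(2024, 1, 1), end=date(2024, 1, 31), tag_id=7)
```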
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds minimal behavioral context by implying filtering capabilities ('filters the history'), but doesn't elaborate on what filtering entails, rate limits, authentication needs, or output format. With annotations covering safety, the description provides some value but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but inefficient—it repeats 'List history about your assignments' and uses a redundant phrase ('List and filters'). While front-loaded, it wastes space on tautology rather than adding value. It could be more concise by eliminating repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (11 parameters, 0% schema coverage, no output schema), the description is inadequate. It doesn't explain parameter meanings, filtering behavior, pagination (via page/size), or what the audit history entails. With annotations covering safety, it partially helps, but gaps in parameter and behavioral details make it incomplete for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 11 parameters are documented in the schema. The description mentions 'filters' but doesn't specify which parameters correspond to filtering (e.g., tagId, startDate, endDate) or explain their purposes. This fails to compensate for the lack of schema documentation, leaving parameters largely unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description restates the title ('List history about your assignments') and adds a tautological phrase ('List and filters the history about your assignments'), which essentially repeats the name/title without providing specific differentiation from sibling tools. It doesn't clearly distinguish this audit-list tool from other list/retrieve tools in the sibling set.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention any context, prerequisites, or exclusions, nor does it reference sibling tools like 'retrieveAssignment' or 'retrieveAssignmentList' that might be related. This leaves the agent with no usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveCustomImagesStats - List statistics regarding the customer's custom images (C, Read-only)
List statistics regarding the customer's custom images - List statistics regarding the customer's custom images such as the number of custom images uploaded, used disk space, free available disk space and total available disk space
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
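The description does name the returned metrics (image count, used/free/total disk space) even though there is no output schema. A sketch of a sanity check over such a response; the key names are guessed from those metrics and the real response may differ:

```python
# Hypothetical response shape for retrieveCustomImagesStats; the key
# names are guessed from the metrics listed in the description.
stats = {
    "currentImagesCount": 3,
    "usedSizeMb": 2048,
    "freeSizeMb": 6144,
    "totalSizeMb": 8192,
}

# Consistency check: used + free space should account for the total quota.
def quota_consistent(s: dict) -> bool:
    return s["usedSizeMb"] + s["freeSizeMb"] == s["totalSizeMb"]

ok = quota_consistent(stats)
```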
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds context by specifying the types of statistics returned (count, disk space metrics), which isn't covered by annotations. However, it doesn't disclose other behavioral traits like rate limits, authentication needs, pagination, or error conditions. With annotations covering safety, the description provides moderate additional value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive and inefficient: it repeats 'List statistics regarding the customer's custom images' twice in a single sentence. The second part adds useful detail (specific metrics), but the redundancy wastes space. It could be restructured as a single concise sentence (e.g., 'List statistics for custom images, including count, used disk space, free space, and total space').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's low complexity (read-only stats retrieval), annotations cover safety, and no output schema exists, the description is moderately complete. It specifies the statistics returned, which helps the agent understand the output. However, it lacks parameter explanations (schema coverage 0%) and doesn't mention behavioral aspects like response format or error handling, leaving gaps for a tool with required parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions no parameters at all, failing to explain the purpose of 'x-request-id' (required) or 'x-trace-id'. However, since there are only 2 parameters (both likely for request tracing/identification), the score stays at the baseline of 3: the description adds no semantic value, but the parameter set is small and straightforward.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List statistics regarding the customer's custom images' with specific metrics listed (number uploaded, disk space usage, free/total disk space). It distinguishes from siblings like 'retrieveImageList' or 'retrieveImage' by focusing on aggregated statistics rather than listing or retrieving individual images. However, it doesn't explicitly contrast with 'retrieveObjectStoragesStats' which handles similar statistics for a different resource.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing custom images to exist), exclusions (e.g., not for instance statistics), or direct comparisons to sibling tools like 'retrieveImageList' (which lists images) or 'retrieveObjectStoragesStats' (which provides stats for object storage). Usage is implied only by the title and description repetition.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveDataCenterList - List data centers (C, Read-only)
List data centers - List all data centers and their corresponding regions.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| slug | No | ||
| orderBy | No | ||
| regionName | No | ||
| regionSlug | No | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
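Because server-side filters such as `regionSlug` are undocumented, a defensive client can fetch the full list and filter locally. A sketch with an assumed response shape; the entry fields mirror the tool's query parameter names, but the real payload may differ:

```python
# Hypothetical data-center entries; the field names (slug, regionSlug)
# mirror the tool's query parameters, but the real response may differ.
data_centers = [
    {"name": "European Union 1", "slug": "EU", "regionSlug": "EU"},
    {"name": "United States Central", "slug": "US-central", "regionSlug": "US"},
    {"name": "United States East", "slug": "US-east", "regionSlug": "US"},
]

# Client-side fallback for the undocumented regionSlug filter.
def by_region(centers, region_slug):
    return [dc for dc in centers if dc["regionSlug"] == region_slug]

us_centers = by_region(data_centers, "US")
```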
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds that it lists 'all' data centers, implying completeness, but doesn't mention pagination behavior, rate limits, or authentication requirements. With annotations covering safety, the description provides minimal additional behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core purpose. It avoids redundancy with the title and name. However, it could be slightly more structured by separating scope from output details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 9 parameters (0% schema coverage), no output schema, and no annotations beyond read/destructive hints, the description is inadequate. It doesn't explain parameter usage, return format, pagination, or filtering capabilities, leaving significant gaps for the agent to operate effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 9 parameters are documented in the schema. The description provides no information about parameters, not even hinting at filtering (e.g., by name or region) or pagination (page/size). This leaves the agent with no semantic understanding of what the parameters do.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List all data centers and their corresponding regions.' It specifies the verb ('List') and resource ('data centers'), and adds useful detail about including regions. However, it doesn't explicitly differentiate from sibling tools like 'retrieveInstanceList' or 'retrievePrivateNetworkList' that also list resources, which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention any prerequisites, constraints, or sibling tools that might be relevant for similar queries. The agent must infer usage from the tool name and context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveDnsZone - Retrieve a DNS Zone by zone name (C, Read-only)
Retrieve a DNS Zone by zone name - Get all attributes for a specific DNS Zone
| Name | Required | Description | Default |
|---|---|---|---|
| zoneName | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds minimal behavioral context by implying it fetches 'all attributes', but doesn't detail aspects like error handling, rate limits, or authentication needs. With annotations covering safety, it adds some value but not rich behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two short sentences that directly state the tool's function. It's front-loaded with the core action and avoids unnecessary details. However, the repetition ('Retrieve a DNS Zone by zone name - Get all attributes...') slightly reduces efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 3 parameters with 0% schema coverage, no output schema, and annotations only cover read/destructive hints, the description is incomplete. It doesn't explain parameter usage, return values, error cases, or how it differs from siblings. For a retrieval tool with undocumented inputs, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters lack documentation in the schema. The description mentions 'zoneName' but doesn't explain its format or purpose, and ignores 'x-request-id' and 'x-trace-id'. It adds minimal semantics beyond naming one parameter, failing to compensate for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Retrieve a DNS Zone by zone name - Get all attributes for a specific DNS Zone'. It specifies the verb ('retrieve'), resource ('DNS Zone'), and scope ('by zone name', 'all attributes'), making it easy to understand. However, it doesn't explicitly differentiate from sibling tools like 'retrieveDnsZonesList' (which likely lists multiple zones) or 'retrieveDnsZoneRecordsList' (which focuses on records within a zone), missing full sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'retrieveDnsZonesList' for listing zones or 'retrieveDnsZoneRecordsList' for zone records, nor does it specify prerequisites or exclusions. Usage is implied by the action but lacks explicit context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveDnsZoneRecordsList - List a DNS Zone's records (C, Read-only)
List a DNS Zone's records - Get all the records of a DNS Zone
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| search | No | ||
| orderBy | No | ||
| zoneName | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
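The `search` and `orderBy` parameters go unexplained. One plausible contract, sketched as client-side equivalents so the assumptions stay explicit; the `"field:direction"` syntax for `orderBy` and the record shape are guesses:

```python
# Hypothetical client-side equivalents of the undocumented `search` and
# `orderBy` parameters; the "field:direction" orderBy syntax is a guess.
records = [
    {"name": "www", "type": "A", "ttl": 300},
    {"name": "mail", "type": "MX", "ttl": 3600},
    {"name": "api", "type": "A", "ttl": 60},
]

def search_records(recs, term):
    # Substring match on the record name, case-insensitive.
    return [r for r in recs if term.lower() in r["name"].lower()]

def order_records(recs, order_by):
    field, _, direction = order_by.partition(":")
    return sorted(recs, key=lambda r: r[field], reverse=(direction == "desc"))

a_like = search_records(records, "a")        # matches "mail" and "api"
by_ttl = order_records(records, "ttl:desc")  # highest TTL first
```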
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive behavior, which the description aligns with by using 'List' and 'Get'. However, the description adds minimal context beyond annotations—it doesn't mention pagination (implied by 'page' and 'size' parameters), search capabilities, or rate limits. With annotations covering safety, it meets a baseline but lacks rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, with two sentences that directly state the tool's function. However, the second sentence is redundant ('Get all the records of a DNS Zone' repeats the first), slightly reducing efficiency. Overall, it's concise but could be more structured by eliminating repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 7 parameters with 0% schema coverage, no output schema, and annotations only covering safety, the description is incomplete. It doesn't explain parameter usage, return values, pagination, or filtering options, leaving significant gaps for a list operation with multiple optional parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters like 'page', 'size', 'search', and 'orderBy' are undocumented in both schema and description. The description only mentions 'DNS Zone' (hinting at 'zoneName') but doesn't explain any parameters' purposes, formats, or interactions, failing to compensate for the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool's purpose ('List a DNS Zone's records') but is somewhat vague and repetitive. It specifies the verb ('List') and resource ('DNS Zone's records'), but doesn't differentiate from sibling tools like 'retrieveDnsZonesList' or 'retrieveDnsZone' beyond the obvious scope of records. The second sentence ('Get all the records of a DNS Zone') essentially restates the first without adding clarity.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention sibling tools like 'retrieveDnsZone' (which might retrieve zone metadata) or 'bulkDeleteDnsZoneRecords' (for deletion), nor does it specify prerequisites such as needing an existing DNS zone. Usage is implied by the name and description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveDnsZonesList - List DNS zones (C, Read-only)
List DNS zones - Get a list of all zones
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| orderBy | No | ||
| tenantId | No | ||
| zoneName | No | ||
| customerId | No | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, which the description doesn't contradict. However, the description adds minimal behavioral context beyond these annotations. It doesn't mention pagination behavior (implied by page/size parameters), rate limits, authentication requirements, or what 'all zones' means in terms of scope (e.g., per tenant). With annotations covering safety, the description provides some basic intent but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two short phrases, but it's under-specified rather than efficiently informative. It front-loads the basic action ('List DNS zones') but wastes the second phrase on repetition ('Get a list of all zones'). While not verbose, it lacks substance, making it less helpful than a more detailed yet concise description would be.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (8 parameters, no output schema, and 0% schema coverage), the description is inadequate. It doesn't explain the return format, pagination, filtering options (via parameters like zoneName or tenantId), or how results are ordered. With annotations providing only safety hints, the description fails to offer a complete picture for effective tool use, especially for a list operation with multiple inputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning no parameters have descriptions in the schema. The tool description provides no information about any of the 8 parameters (e.g., page, size, tenantId, zoneName). It doesn't explain what these parameters do, their formats, or how they affect the listing. This leaves parameters entirely undocumented, failing to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List DNS zones - Get a list of all zones' is tautological, essentially restating the tool name and title without adding specificity. It doesn't clarify what 'zones' are (DNS zones), nor does it distinguish this tool from sibling tools like retrieveDnsZoneRecordsList or other list operations. The purpose is implied but not explicitly articulated beyond repeating the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention how it differs from retrieveDnsZoneRecordsList (which lists records within zones) or other list tools like retrieveDnsZone (which retrieves a single zone). The description offers no context on prerequisites, such as authentication or tenant selection, leaving the agent to infer usage from parameters alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveDomain - List specific domain (D, Read-only)
List specific domain - List specific domain
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, which the description doesn't contradict. However, the description adds no behavioral context beyond these annotations—it doesn't explain what 'list' entails (e.g., returns details, is safe, has no side effects), missing opportunities to clarify scope or constraints.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is overly concise to the point of being unhelpful, repeating 'List specific domain' without adding value. While brief, it's not structured or informative, making it inefficient rather than succinct.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 parameters, no output schema) and lack of schema descriptions, the description is incomplete. It doesn't explain what the tool returns, how parameters are used, or behavioral nuances, leaving significant gaps for the agent to infer.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 3 parameters (domain, x-request-id, x-trace-id), the description provides no information about what these parameters mean, their purposes, or how they affect the operation. It fails to compensate for the lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List specific domain - List specific domain' is a tautology that merely restates the name and title without adding meaningful information. It doesn't specify what 'list' means in this context (retrieve details? show records?) or what 'specific domain' refers to, making the purpose vague and redundant.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, context, or differentiation from sibling tools like 'listDomains' or 'retrieveDomainsAuditsList', leaving the agent with no usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveDomainsAuditsList (Grade C, Read-only): List history about your Domains (audit)
List history about your Domains (audit) - List and filters the history about your Domains.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| domain | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
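Every parameter above except `x-request-id` is an optional filter. One way to assemble them, dropping anything unset so server defaults apply, is sketched below; the ISO-8601 date format is an assumption, since the listing documents none of these fields:

```python
def build_audit_query(page=None, size=None, domain=None,
                      start_date=None, end_date=None,
                      order_by=None, changed_by=None, request_id=None):
    """Collect the optional audit filters into query parameters,
    omitting anything left unset so defaults apply server-side."""
    params = {
        "page": page,
        "size": size,
        "domain": domain,
        "startDate": start_date,   # assumed ISO-8601, e.g. "2024-01-01"
        "endDate": end_date,
        "orderBy": order_by,
        "changedBy": changed_by,
        "requestId": request_id,
    }
    return {k: v for k, v in params.items() if v is not None}
```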
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows it's a safe read operation. The description adds that it 'lists and filters,' implying querying capabilities, but doesn't disclose behavioral traits like pagination handling, rate limits, authentication needs, or what 'history' includes. With annotations covering safety, it adds minimal context beyond them.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the core function in a single sentence. However, it's slightly repetitive ('List history about your Domains' appears twice) and could be more efficient by avoiding redundancy while maintaining clarity.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters, 0% schema coverage, no output schema) and annotations only covering safety, the description is incomplete. It lacks details on parameter usage, return values, error handling, or operational context, making it inadequate for effective tool invocation without additional guesswork.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description only mentions 'list and filters' generically, without explaining any of the 10 parameters (e.g., 'page', 'size', 'domain', date ranges). It fails to compensate for the low coverage, leaving parameter meanings unclear.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters history about Domains, which clarifies the verb (list/filter) and resource (Domains history/audit). However, it's somewhat vague about what 'history' entails (e.g., audit logs, changes) and doesn't explicitly distinguish it from similar tools like 'retrieveDomain' or 'listDomains', though the 'audit' context in the title helps.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description mentions filtering but doesn't specify scenarios, prerequisites, or exclusions. Sibling tools include other audit-related tools (e.g., 'retrieveAssignmentsAuditsList'), but no comparison is made, leaving usage unclear.
retrieveHandle (Grade C, Read-only): Get specific handle
Get specific handle - Get specific handle
| Name | Required | Description | Default |
|---|---|---|---|
| handleId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, which already inform the agent this is a safe read operation. The description adds no behavioral context beyond this, such as rate limits, authentication needs, or what happens if the handle doesn't exist. However, it doesn't contradict the annotations, so it meets the lower bar set by having annotations, but adds minimal extra value.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description 'Get specific handle - Get specific handle' is repetitive and wastes space by restating the same phrase. It lacks structure, such as front-loading key information or breaking down usage. While brief, it's under-specified rather than efficiently concise, as it doesn't convey necessary details in its limited wording.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a retrieval tool with 3 parameters, 0% schema coverage, no output schema, and annotations only covering read/destructive hints), the description is incomplete. It doesn't explain what a handle is, what data is returned, error conditions, or how to use the parameters. With annotations providing basic safety info but no output schema, the description should do more to guide the agent effectively.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the three parameters (handleId, x-request-id, x-trace-id) are documented in the schema. The description provides no information about these parameters, failing to compensate for the lack of schema documentation. It doesn't explain what a handleId is, the purpose of the request/trace IDs, or their formats, leaving the agent with insufficient guidance.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get specific handle - Get specific handle' is a tautology that merely restates the name and title without adding meaningful information. It doesn't explain what a 'handle' is or what this tool actually does beyond the obvious implication of retrieval. While it distinguishes from sibling tools like 'createHandle' and 'removeHandle' by being a 'get' operation, it lacks specificity about the resource or context.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. Given siblings like 'listHandles' (for listing multiple handles) and 'retrieveHandle' (for getting a specific one), the description fails to clarify the distinction or when one should be preferred over the other. There's no mention of prerequisites, such as needing a handleId, or contextual usage tips.
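The critiques above are easier to act on with a concrete contrast. A hypothetical rewrite of the retrieveHandle definition might look like the following; every field value here is illustrative, not taken from Contabo's documentation:

```python
# Hypothetical, agent-friendlier definition for retrieveHandle.
# All wording below is illustrative, not Contabo's actual documentation.
improved_tool = {
    "name": "retrieveHandle",
    "description": (
        "Retrieve one domain-owner handle (contact record) by its handleId. "
        "Read-only with no side effects; fails if the handle does not exist. "
        "Use listHandles first when the handleId is unknown."
    ),
    "inputSchema": {
        "type": "object",
        "required": ["handleId", "x-request-id"],
        "properties": {
            "handleId": {
                "type": "string",
                "description": "Id of the handle, e.g. taken from listHandles.",
            },
            "x-request-id": {
                "type": "string",
                "description": "Client-generated UUID for request tracing.",
            },
            "x-trace-id": {
                "type": "string",
                "description": "Optional id correlating several related calls.",
            },
        },
    },
}
```

A rewrite along these lines addresses the tautology, the undocumented parameters, and the missing pointer to sibling tools in one pass.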
retrieveHandlesAuditsList (Grade C, Read-only): List history about your handles (audit)
List history about your handles (audit) - List and filters the history about your handles.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| endDate | No | ||
| orderBy | No | ||
| handleId | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows it's a safe read operation. The description adds minimal behavioral context by implying filtering capabilities, but doesn't detail aspects like pagination behavior, rate limits, or authentication needs. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, but repetitive ('List history about your handles (audit) - List and filters...'). It could be more efficient by avoiding redundancy while still conveying core purpose.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 10 parameters, 0% schema coverage, no output schema, and no annotations beyond read/destructive hints, the description is inadequate. It lacks details on parameter usage, expected outputs, error handling, or how it fits into the broader audit system, making it incomplete for effective agent use.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 10 parameters, the description fails to compensate by explaining any parameters. It mentions filtering generically but doesn't clarify what parameters like 'orderBy', 'handleId', or 'x-request-id' do, leaving semantics unclear beyond the schema.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters history about handles, which clarifies the action and resource. However, it's somewhat vague ('history about your handles') and doesn't explicitly differentiate from sibling tools like retrieveHandle or listHandles, which might handle current handle data versus audit history.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context (e.g., for auditing changes), or compare it to other audit-related tools like retrieveAssignmentsAuditsList or retrieveDomainsAuditsList, leaving the agent without usage direction.
retrieveImage (Grade B, Read-only): Get details about a specific image by its id
Get details about a specific image by its id - Get details about a specific image. This could be either a standard or custom image. In case of an custom image you can also check the download status
| Name | Required | Description | Default |
|---|---|---|---|
| imageId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
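A sketch of how the single path parameter and the tracing headers might combine into a request; the base URL and the `/compute/images/{imageId}` path are assumptions inferred from the tool name, not confirmed by this listing:

```python
import uuid

API_BASE = "https://api.contabo.com/v1"  # assumed base URL

def image_request(image_id, trace_id=None):
    """Assemble the URL and headers for fetching one image by id.

    The /compute/images/{imageId} path is an assumption drawn from
    the tool name, not from this listing.
    """
    url = f"{API_BASE}/compute/images/{image_id}"
    headers = {"x-request-id": str(uuid.uuid4())}
    if trace_id is not None:
        headers["x-trace-id"] = trace_id
    return url, headers
```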
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds useful context beyond annotations: it clarifies the tool works for both standard and custom images and mentions the ability to check download status for custom images. However, it doesn't describe rate limits, authentication needs, or response format details.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is appropriately concise with two sentences that add value. The first sentence states the core purpose, and the second adds important context about image types and download status. There's no wasted verbiage, though it could be slightly more structured.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, 2 required), lack of output schema, and 0% schema description coverage, the description is incomplete. It covers the purpose and some behavioral context but leaves key parameters undocumented and doesn't explain return values. The annotations help by declaring it read-only and non-destructive, but more detail is needed for full completeness.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 3 parameters with 0% description coverage, meaning none of the parameters have descriptions in the schema. The tool description only mentions 'imageId' and provides no information about 'x-request-id' or 'x-trace-id'. This leaves two required parameters undocumented, failing to compensate for the low schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get details about a specific image by its id' and elaborates that it works for both standard and custom images. It distinguishes itself from sibling tools like 'retrieveImageList' (which lists images) by focusing on a single image, though it doesn't explicitly name alternatives. The verb 'Get details' is specific enough.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides minimal guidance on when to use this tool. It mentions 'This could be either a standard or custom image' and notes that for custom images 'you can also check the download status', but it doesn't explicitly state when to choose this over alternatives like 'retrieveImageList' or 'retrieveCustomImagesStats'. No exclusions or prerequisites are mentioned.
retrieveImageAuditsList (Grade C, Read-only): List history about your DNS Zones (audit)
List history about your DNS Zones (audit) - List and filters the history about your DNS Zones .
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds that it 'filters' history, implying parameter-based querying, but doesn't disclose behavioral traits like pagination (implied by page/size parameters), rate limits, authentication needs, or output format. With annotations covering safety, it adds minimal context beyond them.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but repetitive ('List history... - List and filters...'). It front-loads the purpose but wastes words on redundancy. It could be more structured by separating purpose from filtering details.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 10 parameters with 0% schema coverage, no output schema, and annotations only covering read/destructive hints, the description is incomplete. It doesn't explain what the tool returns, how filtering works, or parameter usage, leaving significant gaps for a list/filter tool with many inputs.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description only mentions 'filters' generically, without explaining any of the 10 parameters (e.g., name, page, size, dates, orderBy). It fails to compensate for the lack of schema descriptions, leaving parameter meanings unclear.
Does the description clearly state what the tool does and how it differs from similar tools?
The description restates the title with slight rephrasing ('List history about your DNS Zones (audit) - List and filters the history about your DNS Zones'), which is tautological. It specifies the verb 'list' and resource 'DNS Zones history/audit' but lacks specificity about what constitutes 'history' or 'audit' data. It doesn't distinguish from sibling tools like retrieveDnsZonesList or retrieveRecordAuditsList.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives is provided. The description mentions filtering but doesn't specify typical use cases, prerequisites, or exclusions. Sibling tools include other audit lists (e.g., retrieveAssignmentsAuditsList, retrieveDomainsAuditsList), but no differentiation is made.
retrieveImageAuditsList1 (Grade C, Read-only): List history about your custom images (audit)
List history about your custom images (audit) - List and filters the history about your custom images.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| endDate | No | ||
| imageId | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds minimal behavioral context by mentioning filtering, but doesn't describe pagination behavior, rate limits, authentication needs, or what specific audit data is returned. With annotations covering safety, the description adds some value but not rich behavioral detail.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive and poorly structured. It repeats 'List history about your custom images (audit)' and uses a dash to add a slightly rephrased version. This wastes space without adding clarity. It's not appropriately front-loaded with essential information.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 10 parameters (0% schema coverage), no output schema, and no annotations beyond read/destructive hints, the description is inadequate. It doesn't explain what the tool returns, how to interpret audit data, or provide any parameter guidance. Given the complexity and lack of structured documentation, it should do much more.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 10 parameters have descriptions in the schema. The description mentions 'filters' but doesn't explain any parameters, their purposes, or how to use them (e.g., what imageId refers to, what orderBy expects, date formats). It fails to compensate for the complete lack of schema documentation.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological, essentially restating the tool name and title. It says 'List history about your custom images (audit) - List and filters the history about your custom images,' which provides no additional clarity beyond what's already in the name/title. It doesn't specify what 'history' means or what kind of audit information is returned.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. There are several sibling audit tools (e.g., retrieveInstancesAuditsList, retrieveDomainsAuditsList), but the description doesn't differentiate this tool from them or explain when this specific audit tool is appropriate.
retrieveImageList (Grade B, Read-only): List available standard and custom images
List available standard and custom images - List and filter all available standard images provided by Contabo and your uploaded custom images.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| search | No | ||
| orderBy | No | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| standardImage | No |
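The page and size parameters imply offset pagination. A sketch of draining every page, with `fetch_page` standing in for the real HTTP call (the empty-page termination condition is an assumption, since no output schema is published):

```python
def fetch_all_images(fetch_page, size=100):
    """Drain a paginated listing page by page.

    fetch_page(page, size) stands in for the real HTTP call and is
    assumed to return that page's items; an empty page ends the loop.
    """
    items, page = [], 1
    while True:
        batch = fetch_page(page, size)
        if not batch:
            return items
        items.extend(batch)
        page += 1

# Stub demonstrating the contract: two populated pages, then empty.
def fake_fetch(page, size):
    data = {1: ["debian-12", "ubuntu-24.04"], 2: ["my-custom-image"]}
    return data.get(page, [])
```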
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds minimal behavioral context by mentioning filtering capabilities, but doesn't disclose pagination behavior, rate limits, or authentication needs. With annotations covering safety, it meets the lower bar but lacks rich behavioral details.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose and uses two concise sentences. However, the first sentence is somewhat redundant ('List available standard and custom images - List and filter...'), slightly reducing efficiency. Overall, it's appropriately sized with minimal waste.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (8 parameters, no output schema) and annotations covering safety, the description is minimally adequate. It explains the resource scope but lacks details on parameter usage, output format, and behavioral traits like pagination. It meets basic needs but has clear gaps in context.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description only mentions filtering by name and type (standard vs. custom), which partially explains the 'name' and 'standardImage' parameters but leaves 6 other parameters (page, size, search, orderBy, x-trace-id, x-request-id) undocumented. It doesn't compensate adequately for the low coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('List and filter') and resource ('available standard images provided by Contabo and your uploaded custom images'), making the purpose specific and understandable. However, it doesn't explicitly distinguish this tool from other list/retrieve tools in the sibling set (like retrieveInstanceList, retrieveDnsZonesList, etc.), which keeps it short of top marks.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, context for filtering, or differentiate it from other image-related tools (like retrieveImage or retrieveCustomImagesStats) in the sibling list, leaving the agent without usage direction.
retrieveInstance (Grade C, Read-only): Get specific instance by id
Get specific instance by id - Get attributes values to a specific instance on your account.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive behavior, which the description doesn't contradict. The description adds minimal context by specifying it retrieves 'attribute values' for 'your account,' hinting at ownership or access scope. However, it lacks details on rate limits, authentication needs, error responses, or data format, which would be valuable given the absence of an output schema. No contradiction with annotations exists.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core action, but it includes redundant phrasing ('Get specific instance by id' is repeated from the title) and could be more structured. Sentences are somewhat awkward ('Get attributes values to a specific instance'), reducing clarity. It avoids unnecessary verbosity but doesn't maximize information density efficiently.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (a read operation with 3 parameters, no output schema, and 0% schema description coverage), the description is incomplete. It covers the basic purpose but lacks details on parameter meanings, return values, error handling, or behavioral constraints beyond annotations. For a tool that likely returns structured instance data, more context is needed to guide effective use by an agent.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'by id,' which aligns with the 'instanceId' parameter, but doesn't explain the other two parameters ('x-request-id' and 'x-trace-id'), their purposes (e.g., request tracking), formats, or requirements. This leaves key parameters semantically unclear, failing to adequately address the low schema coverage.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves a specific instance by ID and gets its attribute values, which clarifies the verb (get/retrieve) and resource (instance). However, it's somewhat vague about what 'attribute values' entails and doesn't explicitly differentiate from siblings like 'retrieveInstancesList' (which likely lists multiple instances) or 'retrieveInstanceAuditsList' (which might retrieve audit logs). The repetition of 'Get specific instance by id' in the title and description adds minor redundancy.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives. The description implies usage for fetching a single instance's attributes, but it doesn't mention prerequisites (e.g., needing a valid instance ID), contrast with list tools (e.g., 'retrieveInstancesList' for multiple instances), or specify error conditions. This leaves the agent to infer usage from context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveInstancesActionsAuditsList: List history about your actions (audit) triggered via the API [Read-only]
List history about your actions (audit) triggered via the API - List and filters the history about your actions your triggered via the API.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| instanceId | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
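Since none of the ten parameters carry schema descriptions, a calling agent has to assemble the arguments on its own. The sketch below shows one way to build such a payload; the helper name is hypothetical, and the ISO-8601 date format and UUID shape for `x-request-id` are assumptions, since the schema documents neither.

```python
import uuid

def build_audit_query(page=1, size=10, start_date=None, end_date=None,
                      instance_id=None):
    """Assemble arguments for retrieveInstancesActionsAuditsList.

    Only x-request-id is required; every filter is optional, so
    None-valued entries are dropped before the call.
    """
    args = {
        "x-request-id": str(uuid.uuid4()),  # required correlation id (assumed UUID format)
        "page": page,
        "size": size,
        "startDate": start_date,  # assumed ISO-8601, e.g. "2024-01-01"
        "endDate": end_date,
        "instanceId": instance_id,
    }
    return {k: v for k, v in args.items() if v is not None}

payload = build_audit_query(size=25, start_date="2024-01-01")
```

Dropping unset filters keeps the call minimal, which matters when the server's default handling of empty filter values is undocumented.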
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds minimal behavioral context beyond this—it mentions filtering but doesn't specify default behaviors (e.g., pagination, sorting, or rate limits). With annotations covering safety, the description adds some value but lacks depth on operational traits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but repetitive ('List history about your actions (audit) triggered via the API - List and filters the history...'). It front-loads the purpose but wastes words on redundancy. A single clear sentence would suffice, making it moderately efficient but not optimally structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (10 parameters, 0% schema coverage, no output schema), the description is inadequate. It doesn't explain what the tool returns, how filtering works, or any constraints (e.g., date formats, pagination defaults). With annotations providing only safety info, the description leaves significant gaps for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 10 parameters have descriptions in the schema. The description only vaguely mentions 'filters' without explaining any parameters (e.g., page, size, dates, IDs). It fails to compensate for the schema's lack of documentation, leaving most parameters semantically unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters history about API-triggered actions (audit), which clarifies the verb ('list') and resource ('history/audit'). However, it's somewhat vague about the exact scope ('your actions triggered via the API') and doesn't clearly differentiate from sibling audit tools like retrieveAssignmentsAuditsList or retrieveDomainsAuditsList, which likely serve similar purposes for different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, typical use cases, or how it compares to other audit-related tools in the sibling list (e.g., retrieveInstancesAuditsList might be for instance-specific audits, but this isn't clarified). Usage is implied only by the tool name and description.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveInstancesAuditsList: List history about your instances (audit) [Read-only]
List history about your instances (audit) - List and filters the history about your instances.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| instanceId | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows it's a safe read operation. The description adds that it 'lists and filters,' implying querying capabilities, but doesn't disclose behavioral traits like pagination (implied by page/size params), rate limits, authentication needs, or what 'audit' specifically includes. It doesn't contradict annotations, but adds minimal context beyond them.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the core purpose in a single sentence. However, it's slightly redundant ('List history about your instances (audit) - List and filters...'), and could be more structured by separating purpose from filtering details. Overall, it's efficient but not perfectly polished.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters, 0% schema coverage, no output schema) and annotations covering only safety, the description is incomplete. It lacks details on parameter usage, return values, error handling, and how it differs from sibling audit tools. For a tool with many undocumented inputs, more context is needed to guide effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description only mentions 'list and filters' generically, without explaining any of the 10 parameters (e.g., page, size, dates, instanceId). It fails to compensate for the low coverage, leaving most parameter meanings unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters history about instances, which clarifies the verb (list/filter) and resource (instance history/audit). However, it's somewhat vague about what 'history' entails (e.g., audit logs, changes) and doesn't differentiate from sibling tools like retrieveInstancesActionsAuditsList or retrieveInstancesList, which might overlap in purpose.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description mentions filtering but doesn't specify scenarios, prerequisites, or exclusions. Sibling tools include various audit and list functions, but no comparison or context is given to help an agent choose appropriately.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveInstancesList: List instances [Read-only]
List instances - List and filter all instances in your account
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| region | No | ||
| search | No | ||
| status | No | ||
| orderBy | No | ||
| addOnIds | No | ||
| ipConfig | No | ||
| tenantId | No | ||
| customerId | No | ||
| dataCenter | No | ||
| instanceId | No | ||
| productIds | No | ||
| x-trace-id | No | ||
| displayName | No | ||
| instanceIds | No | ||
| productTypes | No | ||
| x-request-id | Yes |
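With 19 undocumented parameters, a defensive agent might validate filter names against the table above before issuing the call. A minimal sketch, assuming the parameter names in the table are exhaustive (their semantics, such as valid `status` values, remain undocumented):

```python
import uuid

# Filter names copied from the parameter table; their exact semantics
# are not documented in the schema.
KNOWN_FILTERS = {
    "name", "page", "size", "region", "search", "status", "orderBy",
    "addOnIds", "ipConfig", "tenantId", "customerId", "dataCenter",
    "instanceId", "productIds", "displayName", "instanceIds",
    "productTypes", "x-trace-id",
}

def build_instance_list_args(**filters):
    """Build arguments for retrieveInstancesList, rejecting unknown filters."""
    unknown = set(filters) - KNOWN_FILTERS
    if unknown:
        raise ValueError(f"unknown filter(s): {sorted(unknown)}")
    args = {"x-request-id": str(uuid.uuid4())}  # the only required parameter
    args.update({k: v for k, v in filters.items() if v is not None})
    return args

args = build_instance_list_args(region="EU", status="running", page=1, size=50)
```

Rejecting unknown names early surfaces typos locally instead of relying on the server to report them.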
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds that it lists and filters instances, which provides basic context beyond the annotations. However, it doesn't disclose important behavioral traits like pagination behavior (implied by 'page' and 'size' parameters), rate limits, authentication needs, or what 'all instances in your account' entails. The description doesn't contradict annotations, but adds minimal value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence: 'List instances - List and filter all instances in your account'. It's front-loaded with the core purpose. However, the repetition of 'List instances' from the title adds minor redundancy, slightly reducing conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (19 parameters, 0% schema coverage, no output schema) and the annotations covering only safety, the description is inadequate. It doesn't explain the filtering capabilities, pagination, return format, or how to interpret parameters like 'status' with its enum. For a tool with many undocumented parameters, more detail is needed to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 19 parameters have descriptions in the schema. The description only mentions 'filter' generically, without explaining what parameters are available (e.g., name, status, region) or their semantics. This fails to compensate for the poor schema coverage, leaving most parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'List and filter all instances in your account', which clearly indicates the verb (list/filter) and resource (instances). However, it doesn't distinguish this tool from other list/filter tools like 'retrieveInstance' or 'retrieveInstancesAuditsList' among the many siblings. The purpose is clear but lacks sibling differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. With many sibling tools that also retrieve instance-related data (e.g., 'retrieveInstance', 'retrieveInstancesAuditsList'), there's no indication of when this list/filter tool is appropriate versus when to use other retrieval methods. No explicit or implied usage context is provided.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveObjectStorage: Get specific object storage by its id [Read-only]
Get specific object storage by its id - Get data for a specific object storage on your account.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| objectStorageId | Yes |
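For this get-by-id tool, only two of the three parameters are required. A minimal argument builder, with a hypothetical helper name and an assumed UUID format for `x-request-id`:

```python
import uuid

def build_get_object_storage_args(object_storage_id, trace_id=None):
    """Arguments for retrieveObjectStorage.

    objectStorageId and x-request-id are required; x-trace-id is optional
    and only included when the caller supplies one.
    """
    if not object_storage_id:
        raise ValueError("objectStorageId is required")
    args = {
        "objectStorageId": object_storage_id,
        "x-request-id": str(uuid.uuid4()),  # assumed UUID; schema gives no format
    }
    if trace_id is not None:
        args["x-trace-id"] = trace_id
    return args
```

Validating the id locally avoids a round trip whose error shape is undocumented.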
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive behavior, which the description does not contradict. However, the description adds little beyond this—it does not specify what 'data' is returned, any rate limits, authentication needs, or error conditions. With annotations covering safety, the description provides minimal additional behavioral context, earning a baseline score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but redundant, repeating the title and adding a tautological phrase. It uses two sentences that could be condensed into one more informative statement. While not overly verbose, it lacks efficiency and does not front-load critical information effectively.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and low parameter coverage, the description is incomplete. It does not explain what data is returned, how errors are handled, or any dependencies. For a retrieval tool with multiple parameters, this leaves significant gaps in understanding, making it inadequate for full contextual awareness.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has three parameters with 0% description coverage, and the tool description does not explain any of them. It mentions 'by its id' but does not clarify the 'objectStorageId' parameter or the purpose of 'x-request-id' and 'x-trace-id'. Given the low schema coverage, the description fails to compensate, leaving parameters largely undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves a specific object storage by its ID, which is clear but basic. It repeats the title verbatim ('Get specific object storage by its id') and adds minimal context ('Get data for a specific object storage on your account'), making it somewhat vague. It does not differentiate from sibling tools like 'retrieveObjectStorageList' or 'retrieveObjectStoragesStats', which reduces its effectiveness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It does not mention prerequisites, such as needing an object storage ID, or compare it to related tools like 'retrieveObjectStorageList' for listing all object storages. This lack of context makes it harder for an AI agent to select the correct tool in practice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveObjectStorageAuditsList: List history about your object storages (audit) [Read-only]
List history about your object storages (audit) - List and filters the history about your object storages.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| objectStorageId | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows it's a safe read operation. The description adds no behavioral details beyond this, such as pagination handling, rate limits, or authentication needs. Since annotations cover safety, the description meets a baseline but lacks added value like explaining what 'audit' includes or response format.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but repetitive, stating the same idea twice ('List history about your object storages (audit) - List and filters the history about your object storages'). It's front-loaded but wastes words on redundancy instead of adding clarity. A single, more informative sentence would improve efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (10 parameters, 0% schema coverage, no output schema), the description is inadequate. It doesn't explain the audit data structure, how to interpret results, or handle the many filtering parameters. With annotations providing only safety info, more context on usage and output is needed for effective agent operation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters like 'page', 'size', 'orderBy', and 'objectStorageId' are undocumented in the schema. The description mentions 'filters' but doesn't explain any parameters, their purposes, or how they interact (e.g., date ranges, filtering by request ID). This fails to compensate for the low schema coverage, leaving most parameters ambiguous.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'List history about your object storages (audit)' and repeats 'List and filters the history about your object storages', which clarifies it retrieves audit logs for object storages. However, it's vague about what 'history' entails (e.g., changes, access logs) and doesn't distinguish it from sibling audit tools like 'retrieveObjectStoragesStats' or 'retrieveInstancesAuditsList', missing specificity in scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description only repeats the purpose without mentioning prerequisites, such as needing an object storage ID, or contrasting it with other audit tools like 'retrieveObjectStoragesStats' for non-audit data. This leaves the agent without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveObjectStorageList: List all your object storages [Read-only]
List all your object storages - List and filter all object storages in your account
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| region | No | ||
| orderBy | No | ||
| s3TenantId | No | ||
| x-trace-id | No | ||
| displayName | No | ||
| x-request-id | Yes | ||
| dataCenterName | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds that it's a list operation with filtering capability, which provides useful context beyond the annotations. However, it doesn't mention pagination behavior, rate limits, or authentication requirements that would be helpful for a tool with 9 parameters.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two clear phrases that convey the core functionality. However, the second phrase 'List and filter all object storages in your account' is somewhat redundant with the first, and the structure could be improved by front-loading more critical information about the filtering parameters.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 9 parameters (one required), 0% schema description coverage, no output schema, and no annotations beyond read/destructive hints, the description is inadequate. It doesn't explain the purpose of the parameters, the expected response format, pagination behavior, or how this tool differs from similar retrieval tools in the sibling list.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 9 parameters, the description carries the full burden of explaining parameter meaning. It only mentions 'filter all object storages' without explaining what any of the 9 parameters (page, size, region, orderBy, s3TenantId, x-trace-id, displayName, x-request-id, dataCenterName) actually do or how they affect filtering.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List all your object storages' and adds filtering capability. It specifies the resource ('object storages') and scope ('in your account'), but doesn't explicitly differentiate from sibling tools like 'retrieveObjectStorage' or 'retrieveObjectStoragesStats'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'retrieveObjectStorage' (for single storage) or 'retrieveObjectStoragesStats' (for statistics). It mentions filtering but doesn't specify when filtering is appropriate or what distinguishes this list operation from other retrieval tools.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveObjectStoragesStats: List usage statistics about the specified object storage [Read-only]
List usage statistics about the specified object storage - List usage statistics about the specified object storage such as the number of objects uploaded / created, used object storage space. Please note that the usage statistics are updated regularly and are not live usage statistics.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| objectStorageId | Yes |
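Because the description warns that statistics are refreshed periodically rather than live, a caller may want to track when a snapshot was taken. A sketch of one approach; the helper names and the 300-second freshness window are assumptions, since the API documents no refresh interval:

```python
import time
import uuid

def build_stats_request(object_storage_id):
    """Arguments for retrieveObjectStoragesStats plus a local timestamp.

    Recording when the call was issued lets a caller decide whether a
    cached snapshot is still fresh enough to reuse.
    """
    return {
        "args": {
            "objectStorageId": object_storage_id,
            "x-request-id": str(uuid.uuid4()),
        },
        "requested_at": time.time(),
    }

def is_stale(entry, max_age_seconds=300):
    """Treat a snapshot as stale after max_age_seconds (an assumed window)."""
    return time.time() - entry["requested_at"] > max_age_seconds
```

Callers needing real-time usage should not rely on this tool at all, per the description's "not live" caveat.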
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive behavior, which the description aligns with by describing a listing operation. The description adds valuable context by noting that statistics are 'updated regularly and are not live,' clarifying data freshness, which goes beyond the annotations. No contradictions exist.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded but includes redundant phrasing, repeating the title. The second sentence adds useful information about data freshness, but the repetition reduces efficiency. It is appropriately sized but could be more streamlined.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the lack of output schema and low schema coverage, the description provides basic purpose and behavioral context (non-live data). However, it misses details on parameter meanings, return format, or error handling, making it incomplete for a tool with undocumented inputs and no output schema.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 3 parameters (2 required), the description does not explain any parameters. It mentions 'specified object storage' but does not detail what 'objectStorageId' represents or the purpose of 'x-request-id' and 'x-trace-id', failing to compensate for the lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists usage statistics about object storage, which clarifies its purpose. However, it repeats the title verbatim ('List usage statistics about the specified object storage') and lacks differentiation from siblings like 'retrieveObjectStorage' or 'retrieveObjectStorageList', making it vague about what distinguishes this specific retrieval operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives. The description mentions that statistics are 'not live' and updated regularly, which hints at context but does not specify alternatives or exclusions, such as whether it should be used over other retrieval tools for real-time data.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrievePrivateNetwork: Get specific Private Network by id [Read-only]
Get specific Private Network by id - Get attributes values to a specific Private Network on your account.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| privateNetworkId | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, indicating this is a safe read operation. The description adds minimal behavioral context beyond this—it implies the tool fetches data for a specific network by ID, but doesn't disclose rate limits, authentication needs, error conditions, or response format. With annotations covering safety, it meets baseline expectations but lacks enriching details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but somewhat redundant: 'Get specific Private Network by id - Get attributes values to a specific Private Network on your account.' The second phrase repeats the intent without adding new information. It's front-loaded with the core purpose, but the repetition slightly reduces efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (3 parameters, 0% schema coverage, no output schema, and annotations only covering safety), the description is insufficient. It doesn't explain parameter meanings, return values, error handling, or usage context. For a retrieval tool with undocumented inputs and outputs, more detail is needed to guide the agent effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description mentions 'by id' which hints at the privateNetworkId parameter, but doesn't explain the x-request-id or x-trace-id parameters, their purposes, or formats. It fails to compensate for the schema's lack of descriptions, leaving key parameters ambiguous.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Get specific Private Network by id' and 'Get attributes values to a specific Private Network on your account.' It specifies the verb (get/retrieve) and resource (Private Network), but doesn't explicitly differentiate from sibling tools like retrievePrivateNetworkList or retrievePrivateNetworkAuditsList, which a top score would require.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like retrievePrivateNetworkList (for listing multiple networks) or retrievePrivateNetworkAuditsList (for audit logs), nor does it specify prerequisites or exclusions. The agent must infer usage from the name and context alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrievePrivateNetworkAuditsList - List history about your Private Networks (audit) (Grade C, Read-only)
List history about your Private Networks (audit) - List and filters the history about your Private Networks.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| privateNetworkId | No |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds that it 'lists and filters,' implying query capabilities, but doesn't disclose behavioral details like pagination handling, rate limits, or authentication needs beyond what annotations provide. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, but it's repetitive ('List history about your Private Networks (audit) - List and filters the history about your Private Networks'), wasting words. It could be more concise by eliminating redundancy while retaining clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters, 0% schema coverage, no output schema), the description is inadequate. It doesn't explain return values, error handling, or parameter usage, leaving significant gaps for the agent to operate effectively with this audit listing tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 10 parameters, the description fails to add meaningful semantics beyond the schema. It mentions filtering but doesn't explain what parameters like 'orderBy', 'changedBy', or 'privateNetworkId' do, leaving the agent to infer from schema types alone, which is insufficient given the low coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters audit history for Private Networks, which clarifies the verb (list/filter) and resource (Private Networks audit history). However, it's somewhat vague ('history about your Private Networks') and doesn't clearly differentiate from sibling audit tools like retrievePrivateNetworkList or retrieveInstancesAuditsList, which could cause confusion about scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description mentions filtering but doesn't specify when filtering is appropriate or how this tool differs from other audit or list tools in the sibling set, leaving the agent without clear usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrievePrivateNetworkList - List Private Networks (Grade C, Read-only)
List Private Networks - List and filter all Private Networks in your account
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| region | No | ||
| orderBy | No | ||
| dataCenter | No | ||
| x-trace-id | No | ||
| instanceIds | No | ||
| x-request-id | Yes |
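Since the schema documents none of these parameters, an agent has to guess at their shapes. Below is a minimal sketch of what an MCP `tools/call` payload for this tool might look like; the filter values, the 1-based page numbering, and the UUID format for `x-request-id` are all assumptions, not documented behavior.

```python
import json
import uuid

# Hypothetical MCP "tools/call" payload for retrievePrivateNetworkList.
# Parameter names come from the table above; the chosen values and the
# UUID format for x-request-id are assumptions, since neither the schema
# nor the description documents them.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "retrievePrivateNetworkList",
        "arguments": {
            "x-request-id": str(uuid.uuid4()),  # required; purpose undocumented
            "page": 1,       # pagination: first page (assumed 1-based)
            "size": 25,      # pagination: items per page
            "region": "EU",  # assumed filter value
        },
    },
}

print(json.dumps(payload, indent=2))
```

A better tool description would let the agent fill these arguments without guessing.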
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive behavior, which the description doesn't contradict. The description adds minimal behavioral context beyond annotations—it mentions filtering but doesn't specify what filtering is available, pagination behavior, or any rate limits. With annotations covering safety, this gets a baseline score for adding some value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the core function in a single sentence. There's no wasted text, though it could be more informative. It's appropriately sized for a basic tool description.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters with 0% schema coverage, no output schema, and annotations only covering read-only/non-destructive aspects, the description is incomplete. It doesn't explain parameter usage, return values, or behavioral details like pagination or error handling, making it inadequate for a tool with this complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden for explaining parameters. It mentions filtering but doesn't detail any of the 9 parameters (e.g., name, page, size, region, orderBy, dataCenter, instanceIds, x-request-id, x-trace-id). This leaves parameters undocumented and doesn't compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters private networks, which is a clear purpose. However, it's somewhat vague about the filtering capabilities and doesn't distinguish this tool from other list/retrieve tools in the sibling set (like retrieveInstancesList, retrieveDnsZonesList, etc.) beyond specifying the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, context for filtering, or how this differs from other retrieval tools in the sibling list. It simply restates the basic function without usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrievePtrRecord - Retrieve a PTR Record by ip address (Grade B, Read-only)
Retrieve a PTR Record by ip address - Get all attributes for a specific PTR Record
| Name | Required | Description | Default |
|---|---|---|---|
| ipAddress | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive behavior, which the description doesn't contradict. The description adds minimal context by implying it fetches all attributes, but doesn't detail rate limits, authentication needs, or error handling. With annotations covering safety, this is adequate but lacks rich behavioral insights.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the core action in the first clause. Both sentences are relevant, with no wasted words, though it could be slightly more structured for clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's moderate complexity (3 parameters, no output schema), the description is minimally complete. It covers the basic purpose but lacks details on parameter usage, error cases, or output format. Annotations help, but more context would improve usability.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It only mentions 'ip address', which maps to one parameter, but ignores 'x-request-id' and 'x-trace-id'. This leaves two parameters unexplained, failing to adequately supplement the schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: retrieving a PTR record by IP address and getting all its attributes. It specifies the verb ('retrieve'), resource ('PTR Record'), and key identifier ('by ip address'). However, it doesn't explicitly differentiate from sibling tools like 'retrievePtrRecordsList' or 'retrieveRecordAuditsList', which prevents a perfect score.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'retrievePtrRecordsList' for listing multiple records or 'retrieveRecordAuditsList' for audit data, nor does it specify prerequisites or exclusions for usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrievePtrRecordsList - List PTR records (Grade B, Read-only)
List PTR records - Get a list of all PTR records, either customer or a list of IPs is required
| Name | Required | Description | Default |
|---|---|---|---|
| ips | No | ||
| page | No | ||
| size | No | ||
| search | No | ||
| orderBy | No | ||
| tenantId | No | ||
| customerId | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
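The description's one useful constraint ("either customer or a list of IPs is required") never reaches the schema, so a client has to enforce it itself. A hedged sketch of how an agent-side helper might do that; the helper, its parameter mapping, and the reading of the constraint as "at least one of customerId or ips" are illustrative assumptions:

```python
import uuid

def build_ptr_list_args(customer_id=None, ips=None, page=1, size=50):
    """Build an arguments dict for retrievePtrRecordsList.

    The tool description says "either customer or a list of IPs is
    required"; we read that as: at least one of customerId or ips must
    be supplied. This helper is an illustrative assumption, not part of
    the documented API.
    """
    if customer_id is None and not ips:
        raise ValueError("Provide customerId or a non-empty list of IPs")
    args = {
        "x-request-id": str(uuid.uuid4()),  # required by the schema
        "page": page,
        "size": size,
    }
    if customer_id is not None:
        args["customerId"] = customer_id
    if ips:
        args["ips"] = ips
    return args
```

Encoding the constraint in the schema (e.g. via `anyOf`) would make this client-side guard unnecessary.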
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description states it will 'Get a list of all PTR records', implying it returns multiple items, but doesn't disclose pagination behavior (though 'page' and 'size' parameters hint at it), rate limits, or authentication needs. With annotations covering safety, the description adds modest context but lacks rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two short sentences that convey the core purpose and a key parameter requirement. It's front-loaded with the main action. However, the second sentence could be clearer (e.g., 'Requires either customerId or ips parameter').
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 9 parameters with 0% schema coverage and no output schema, the description is incomplete. It hints at required parameters but leaves most undocumented. Annotations cover safety, but behavioral aspects like pagination or response format are missing. For a list tool with many parameters, this is minimally adequate but has significant gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'customer or a list of IPs is required', which clarifies that 'customerId' or 'ips' parameters are needed, but it doesn't explain the other 7 parameters (e.g., 'page', 'size', 'search', 'orderBy', 'tenantId', 'x-trace-id', 'x-request-id'). This partial coverage is insufficient given the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'List PTR records' and 'Get a list of all PTR records'. It specifies the resource (PTR records) and action (list/get). However, it doesn't explicitly differentiate from sibling tools like 'retrievePtrRecord' (singular) or 'deletePtrRecord', though the plural 'list' implies a collection operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides some usage context: 'either customer or a list of IPs is required', indicating at least one of these parameters must be provided. However, it doesn't explicitly state when to use this tool versus alternatives like 'retrievePtrRecord' (singular retrieval) or 'createPtrRecord', nor does it mention prerequisites or exclusions beyond the parameter requirement.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveRecordAuditsList - List history about your DNS Records (audit) (Grade C, Read-only)
List history about your DNS Records (audit) - List and filter the history of changes made to your DNS Records.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| endDate | No | ||
| orderBy | No | ||
| recordId | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive behavior, which the description aligns with by implying a listing function. The description adds context about filtering historical changes, but doesn't detail rate limits, authentication needs, or pagination behavior beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but somewhat repetitive, restating 'List history about your DNS Records' from the title. It uses two sentences that could be more efficiently combined. While not verbose, it doesn't maximize information density.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 10 parameters, 0% schema coverage, no output schema, and no annotations beyond read-only/destructive hints, the description is insufficient. It doesn't explain return values, error conditions, or parameter interactions, leaving significant gaps for agent understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 10 parameters, the description fails to compensate. It mentions filtering but doesn't explain parameters like page, size, orderBy, or the required x-request-id. No semantic context is added beyond the basic concept of filtering.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters history of changes to DNS Records, which is clear but somewhat vague. It repeats 'List history about your DNS Records' from the title, making it partially tautological. It distinguishes from non-audit siblings but not clearly from other audit tools like retrieveDnsZoneRecordsList.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives is provided. The description mentions filtering but doesn't specify scenarios or prerequisites. Sibling tools include other audit and list functions, but no comparison or context is given.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveRole - Get specific role by id (Grade C, Read-only)
Get specific role by id - Get attributes of specific role.
| Name | Required | Description | Default |
|---|---|---|---|
| roleId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description doesn't add behavioral details beyond this, such as error handling or response format, but it doesn't contradict the annotations either. With annotations covering safety, the description adds minimal value, warranting a baseline score.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but repetitive, with 'Get specific role by id' appearing twice. It's front-loaded but wastes words on redundancy. While not overly verbose, it could be more efficient by eliminating the tautology and adding useful context instead.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a retrieval tool with 3 parameters (2 required) and no output schema, the description is incomplete. It lacks details on parameter usage, error cases, and return values. With annotations providing only safety hints, more context is needed for effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters like 'roleId', 'x-request-id', and 'x-trace-id' are undocumented in the schema. The description mentions 'by id' but doesn't explain parameter meanings, formats, or requirements. It fails to compensate for the low schema coverage, leaving key parameters ambiguous.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves a specific role by ID and gets its attributes, which is clear but somewhat vague. It repeats 'Get specific role by id' from the title, making it partially tautological. It doesn't distinguish this tool from sibling tools like 'retrieveRoleList' or 'retrieveRoleAuditsList', which reduces its effectiveness.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'retrieveRoleList' for listing roles or 'retrieveRoleAuditsList' for audit logs, nor does it specify prerequisites such as needing a role ID. This leaves the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveRoleAuditsList - List history about your roles (audit) (Grade C, Read-only)
List history about your roles (audit) - List and filter the history about your roles.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| roleId | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
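The date and pagination parameters above carry no schema documentation, so even their formats are guesswork. A minimal client-side sketch of how an agent might fill them for a "last 7 days" audit query; the ISO 8601 date format is an assumption the schema does not confirm:

```python
from datetime import date, timedelta
import uuid

# Hypothetical arguments for retrieveRoleAuditsList covering the last
# 7 days. ISO 8601 dates are assumed; the schema documents no format.
end = date(2024, 1, 8)  # fixed date so the example is reproducible
args = {
    "x-request-id": str(uuid.uuid4()),  # required tracing header
    "startDate": (end - timedelta(days=7)).isoformat(),
    "endDate": end.isoformat(),
    "page": 1,
    "size": 50,
}
```

Whether the server accepts this format, or expects timestamps instead, is exactly the kind of detail the description should state.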
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, which the description doesn't contradict (it describes a listing/filtering operation). However, the description adds minimal behavioral context beyond annotations—it hints at filtering capabilities but doesn't detail aspects like pagination behavior (implied by page/size parameters), rate limits, authentication needs, or what 'history' includes (e.g., timestamps, user actions). With annotations covering safety, it earns a baseline score for not misleading, but adds limited value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but repetitive and under-specified. It wastes words by restating the title ('List history about your roles (audit)') and adds a vague clause ('List and filter the history about your roles'). While not verbose, it lacks informative content, making it inefficient rather than concise. A single, clearer sentence would be more effective.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters, 0% schema coverage, no output schema), the description is inadequate. It doesn't explain what the tool returns (e.g., audit log entries), how filtering works, or usage context. Annotations provide basic safety info, but for a tool with many undocumented parameters and no output schema, the description should offer more semantic guidance to be complete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description mentions 'filter' generically but doesn't explain any of the 10 parameters (e.g., roleId for filtering by role, startDate/endDate for date ranges, page/size for pagination). It fails to compensate for the schema gap, leaving parameter meanings unclear beyond their names.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological, essentially restating the title ('List history about your roles (audit) - List and filter the history about your roles'). It doesn't specify what 'history' entails (e.g., audit logs of role changes) or differentiate from sibling audit tools like retrieveRoleList (which likely lists current roles). The verb 'list and filter' is generic and lacks specificity about the resource scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives is provided. Sibling tools include retrieveRoleList (for current roles) and other audit tools (e.g., retrieveAssignmentsAuditsList), but the description doesn't clarify this tool's specific context (e.g., for auditing role modifications vs. viewing active roles). It mentions filtering but doesn't specify typical use cases or prerequisites.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveRoleList - List roles (Grade C, Read-only)
List roles - List and filter all your roles. A role allows you to specify permission to api endpoints and resources like compute.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| type | No | ||
| apiName | No | ||
| orderBy | No | ||
| tagName | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds 'List and filter all your roles', which implies it returns a collection, but doesn't disclose behavioral traits like pagination behavior, rate limits, authentication requirements, or what 'all your roles' means in terms of scope. With annotations covering safety, a 3 is appropriate; the description adds minimal context beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core action ('List roles - List and filter all your roles.'), followed by a clarifying sentence about roles. There's no wasted text, but it could be more structured (e.g., separating purpose from parameter hints).
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (9 parameters, 0% schema coverage, no output schema) and annotations only covering safety, the description is incomplete. It doesn't explain the return format, pagination, filtering logic, or error handling. For a list/filter tool with many undocumented parameters, this leaves significant gaps for the agent to infer behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 9 parameters are documented in the schema. The description mentions 'filter' but doesn't explain which parameters (name, type, apiName, tagName, etc.) correspond to filtering, what 'orderBy' does, or the purpose of pagination (page, size) and tracing headers (x-request-id, x-trace-id). It fails to compensate for the lack of schema documentation, leaving most parameters ambiguous.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List and filter') and resource ('roles'), and explains what a role is ('specify permission to api endpoints and resources like compute'). However, it doesn't explicitly differentiate from sibling tools like 'retrieveRole' or 'createRole', which would be needed for a score of 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention when filtering is appropriate, when to use pagination parameters, or how this differs from 'retrieveRole' (singular) or other list tools. The agent must infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveSecret - Get specific secret by id (Grade C, Read-only)
Get specific secret by id - Get attributes values for a specific secret on your account.
| Name | Required | Description | Default |
|---|---|---|---|
| secretId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds that it retrieves 'attributes values,' which provides some behavioral context beyond annotations, such as what information is returned. However, it lacks details on permissions, rate limits, or error handling, which would be useful given no output schema.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the core purpose in a single sentence. However, it's slightly repetitive ('Get specific secret by id - Get attributes values...'), which could be streamlined. Overall, it's efficient but not perfectly concise.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity of a retrieval tool with 3 parameters (2 required) and 0% schema coverage, the description is insufficient. It lacks details on parameter meanings, output format (no output schema), and behavioral nuances like error cases or authentication needs. Sibling tools suggest this is part of a larger system, but the description doesn't integrate that context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description mentions 'by id,' which hints at the 'secretId' parameter, but it doesn't explain the other parameters (x-request-id and x-trace-id), their purposes, or formats. This leaves significant gaps in understanding the inputs.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves a specific secret by ID and gets its attribute values, which clarifies the verb and resource. However, it's somewhat vague about what 'attributes values' entails and doesn't explicitly differentiate from sibling tools like 'retrieveSecretList' or 'retrieveSecretAuditsList', which appear to handle lists or audit logs of secrets.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention using 'retrieveSecretList' for listing secrets or 'retrieveSecretAuditsList' for audit logs, nor does it specify prerequisites like needing a secret ID. The description only restates the basic functionality without contextual usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveSecretAuditsList: List history about your secrets (audit) (Grade C, Read-only)
List history about your secrets (audit) - List and filters the history about your secrets.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| endDate | No | ||
| orderBy | No | ||
| secretId | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
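As a sketch of how the optional filters above might combine, here is a hypothetical argument set for a paginated, date-bounded audit query. The values and the ISO-8601 date format are assumptions; the schema documents neither:

```python
# Hypothetical retrieveSecretAuditsList arguments: page through the
# audit trail of one secret inside a date window. The values, and the
# ISO-8601 date format, are assumptions; the schema documents neither.
args = {
    "x-request-id": "0b1c2d3e-4f5a-6b7c-8d9e-0f1a2b3c4d5e",  # required
    "secretId": 4321,           # optional: restrict to one secret's history
    "startDate": "2024-01-01",  # optional; date format assumed
    "endDate": "2024-06-30",
    "page": 1,                  # optional pagination controls
    "size": 50,
}

# Only x-request-id is required; every other field narrows the result.
filters = sorted(k for k in args if k != "x-request-id")
```

An agent has to guess all six of these semantics from the parameter names alone, which is exactly the gap the assessments below flag.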
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds minimal behavioral context by implying filtering capabilities ('List and filters'), but doesn't disclose details like pagination behavior, rate limits, authentication needs, or what 'audit' specifically includes (e.g., change logs). With annotations covering safety, it adds some value but lacks rich behavioral disclosure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core purpose. However, it's slightly redundant ('List history about your secrets (audit) - List and filters the history about your secrets'), which could be trimmed for efficiency. Overall, it's concise but could be more structured to avoid repetition.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (10 parameters, 0% schema coverage, no output schema) and annotations only covering read/destructive hints, the description is incomplete. It doesn't explain parameter usage, return values, or behavioral nuances like pagination or filtering logic. For a tool with this many undocumented parameters, that leaves significant gaps for an agent trying to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'List and filters' but doesn't explain any of the 10 parameters (e.g., page, size, secretId, startDate). It fails to compensate for the schema gap, leaving parameters undocumented and their semantics unclear beyond basic types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters secret audit history, which is a clear purpose. However, it's somewhat vague about what 'history about your secrets' entails (e.g., audit logs of changes) and doesn't explicitly differentiate from sibling audit tools like retrieveSecretList or retrieveSecret, which might list secrets themselves rather than their audit history.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing secret IDs), exclusions, or comparisons to other audit tools in the sibling list (e.g., retrieveSecretAuditsList vs retrieveAssignmentsAuditsList). The description only restates the purpose without contextual usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveSecretList: List secrets (Grade C, Read-only)
List secrets - List and filter all secrets in your account.
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| type | No | ||
| orderBy | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
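A hedged sketch of a filtered listing call, built from the table above. The `type` vocabulary and the `orderBy` syntax are assumptions — the schema documents neither:

```python
# Hypothetical retrieveSecretList arguments: filter by name and type,
# then sort. The 'type' vocabulary and the orderBy syntax are both
# assumptions; the schema documents neither.
args = {
    "x-request-id": "a1b2c3d4-e5f6-7a8b-9c0d-e1f2a3b4c5d6",  # required
    "name": "prod-db",      # optional; exact vs. substring match unspecified
    "type": "password",     # optional; allowed values undocumented
    "orderBy": "name:asc",  # optional; sort syntax assumed
    "page": 1,
    "size": 25,
}
assert "x-request-id" in args  # the only field the table marks required
```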
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds that it lists 'all secrets in your account' and mentions filtering, which provides useful context about scope and functionality. However, it doesn't describe pagination behavior (implied by page/size parameters), rate limits, authentication requirements, or return format. With annotations covering safety, the description adds moderate value but lacks rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is very concise - just one sentence with no wasted words. It's front-loaded with the core purpose ('List secrets') and adds a brief elaboration. However, the title is echoed verbatim ('List secrets - List and filter...'), and the brevity comes at the cost of completeness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 7 parameters (one required), 0% schema description coverage, no output schema, and no annotations beyond basic safety hints, the description is inadequate. It doesn't explain parameter usage, return values, pagination, filtering logic, or error conditions. The agent would struggle to use this tool correctly without guessing about parameter semantics and expected behavior.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 7 parameters have descriptions in the schema. The description only mentions filtering generically ('filter all secrets'), which partially explains the 'name' and 'type' parameters but ignores 'page', 'size', 'orderBy', 'x-trace-id', and the required 'x-request-id'. It doesn't explain what these parameters do, their formats, or constraints. The description fails to compensate for the complete lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool's purpose as 'List secrets - List and filter all secrets in your account.' This is clear but somewhat vague. It specifies the verb ('List') and resource ('secrets'), but doesn't distinguish it from sibling tools like 'retrieveSecret' or 'retrieveSecretAuditsList' beyond mentioning filtering capability. The description is functional but lacks specificity about what makes this listing tool unique.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions filtering but doesn't specify when filtering is appropriate or when other tools like 'retrieveSecret' (for individual secrets) or 'retrieveSecretAuditsList' (for audit logs) should be used instead. There are no prerequisites, exclusions, or explicit alternatives mentioned.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveSnapshot: Retrieve a specific snapshot by id (Grade C, Read-only)
Retrieve a specific snapshot by id - Get all attributes for a specific snapshot
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| snapshotId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
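Unlike the single-ID getters, this tool requires two resource identifiers. A hypothetical argument set, with both IDs and their formats as placeholders:

```python
# Hypothetical retrieveSnapshot arguments. Unlike the secret and tag
# getters, this tool takes two resource IDs: the parent instance and
# the snapshot itself. Both values and their formats are placeholders.
args = {
    "instanceId": 100001,         # required: owning instance
    "snapshotId": "snap-xyz123",  # required; ID format assumed
    "x-request-id": "c9d8e7f6-a5b4-3c2d-1e0f-9a8b7c6d5e4f",  # required
}

required = {"instanceId", "snapshotId", "x-request-id"}
assert required <= args.keys()
```

The two-ID requirement is visible only in the table, not in the description — an agent that reuses the retrieveSecret pattern and omits instanceId would fail on first attempt.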
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive operations, which the description aligns with by using 'retrieve' and 'get'. However, the description adds minimal behavioral context beyond annotations—it doesn't specify error conditions (e.g., invalid IDs), authentication needs, rate limits, or response format. With annotations covering safety, it meets a baseline but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the core action in a single sentence. However, it's slightly redundant ('Retrieve... - Get...'), and the second part adds little value. Overall, it's efficient but could be tighter.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage, no output schema, and annotations only covering read-only/destructive hints, the description is incomplete. It doesn't explain parameter roles, error handling, or return values, leaving significant gaps for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries the full burden for parameter meaning. Beyond implying snapshotId via 'by id', it explains none of the required parameters (instanceId, snapshotId, x-request-id) or the optional x-trace-id. This leaves key parameters undocumented, failing to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool's purpose as retrieving a specific snapshot by ID and getting all its attributes, which is clear but somewhat vague. It distinguishes from sibling tools like 'retrieveSnapshotList' by focusing on a single snapshot, but doesn't explicitly contrast with other retrieval tools like 'retrieveInstance' or 'retrieveImage' that follow similar patterns.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites (e.g., needing a valid snapshot ID), when this tool is appropriate compared to 'retrieveSnapshotList' for listing multiple snapshots, or any constraints on usage.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveSnapshotList: List snapshots (Grade B, Read-only)
List snapshots - List and filter all your snapshots for a specific instance
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| orderBy | No | ||
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
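A sketch of a scoped listing call: per the table, instanceId and x-request-id are the only required fields, and everything else filters or paginates. Values, the name-matching rule, and the sort syntax are all assumptions:

```python
# Hypothetical retrieveSnapshotList arguments: list snapshots of one
# instance, filtered by name and paginated. Per the table, instanceId
# and x-request-id are the only required fields; values are placeholders.
args = {
    "instanceId": 100001,   # required: instance whose snapshots to list
    "x-request-id": "d4c3b2a1-f6e5-8b7a-0d9c-6d5c4b3a2f1e",  # required
    "name": "nightly",      # optional name filter; matching rule unspecified
    "page": 1,
    "size": 10,
    "orderBy": "name:asc",  # sort syntax assumed
}

required = {"instanceId", "x-request-id"}
assert required <= args.keys()
```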
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive behavior, which the description does not contradict. The description adds context about filtering and instance-specific scope, but lacks details on pagination behavior (implied by 'page' and 'size' parameters), rate limits, or authentication needs. With annotations covering safety, this adds some value but not rich behavioral context.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence that front-loads the core action. It avoids redundancy but could be slightly more structured to separate listing from filtering aspects.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read-only tool with annotations, the description is minimally adequate. However, with 7 parameters (0% schema coverage) and no output schema, it fails to explain parameter meanings or return values. Given the complexity, it should provide more guidance on usage and parameters.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden. It mentions filtering and instance-specific scope, which hints at 'instanceId' and possibly 'name', but does not explain the purpose of other parameters like 'page', 'size', 'orderBy', 'x-request-id', or 'x-trace-id'. This leaves most parameters undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('List') and resource ('snapshots'), and specifies the scope ('for a specific instance'). However, it does not explicitly differentiate from sibling tools like 'retrieveSnapshot' or 'retrieveSnapshotsAuditsList', which might have overlapping purposes.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description mentions filtering ('filter all your snapshots') and targeting a specific instance, but provides no explicit guidance on when to use this tool versus alternatives like 'retrieveSnapshot' (for a single snapshot) or 'retrieveSnapshotsAuditsList' (for audit logs). No exclusions or prerequisites are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveSnapshotsAuditsList: List history about your snapshots (audit) triggered via the API (Grade C, Read-only)
List history about your snapshots (audit) triggered via the API - List and filters the history about your snapshots your triggered via the API.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| instanceId | No | ||
| snapshotId | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows it's a safe read operation. The description adds that it lists 'history' and 'filters,' implying temporal and attribute-based querying, which provides some behavioral context beyond annotations. However, it lacks details on pagination, rate limits, or authentication needs.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive ('List history... - List and filters...') and could be condensed. It's front-loaded but wastes words on redundancy instead of adding value. A single clear sentence would suffice, making this inefficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 11 parameters with 0% schema coverage, no output schema, and annotations only covering safety, the description is inadequate. It doesn't explain parameter usage, return values, or error conditions, leaving significant gaps for a complex filtering tool.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description only mentions 'filters' generically without explaining any of the 11 parameters (e.g., page, size, dates, IDs). This fails to compensate for the schema gap, leaving most parameter meanings unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists history about snapshots triggered via the API, which clarifies the verb (list) and resource (snapshot audit history). However, it's repetitive ('List history... - List and filters...') and doesn't explicitly differentiate from sibling audit tools like retrieveSnapshotList or other *_auditsList tools, leaving ambiguity about scope.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description mentions filtering but doesn't specify criteria or compare to other audit tools (e.g., retrieveSnapshotList for current snapshots vs. this for historical audits). This leaves the agent without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveTag: Get specific tag by id (Grade C, Read-only)
Get specific tag by id - Get attributes values to a specific tag on your account.
| Name | Required | Description | Default |
|---|---|---|---|
| tagId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds that it retrieves 'attributes values' for a tag, which provides some context about what data is returned. However, it doesn't disclose rate limits, authentication needs, or error behaviors beyond what annotations cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive ('Get specific tag by id - Get attributes values...') and awkwardly phrased. It could be more front-loaded and concise, such as 'Retrieve a tag's attributes by its ID.' The current structure adds little value beyond the title.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a read operation with no output schema, the description should clarify what 'attributes values' includes (e.g., tag name, color, metadata). With 3 parameters and 0% schema coverage, it lacks details on parameter usage and return values, making it incomplete for effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description mentions 'by id' which hints at the 'tagId' parameter, but doesn't explain 'x-request-id' or 'x-trace-id' (e.g., their purposes as request identifiers or optional tracing). It fails to compensate for the low schema coverage, leaving two parameters unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states 'Get specific tag by id' which clarifies the verb (get) and resource (tag), but it's somewhat vague about what 'get attributes values' means. It distinguishes from siblings like 'retrieveTagList' by focusing on a single tag, but doesn't clearly differentiate from 'retrieveTagAuditsList' or 'updateTag' beyond the basic operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance on when to use this tool versus alternatives like 'retrieveTagList' for listing tags or 'retrieveTagAuditsList' for audit logs. The description implies it's for fetching a single tag by ID, but doesn't specify prerequisites (e.g., needing a valid tag ID) or exclusions (e.g., not for creating or updating tags).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveTagAuditsList: List history about your assignments (audit) (Grade C, Read-only)
List history about your assignments (audit) - List and filters the history about your assignments.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | ||
| size | No | ||
| tagId | No | ||
| endDate | No | ||
| orderBy | No | ||
| changedBy | No | ||
| requestId | No | ||
| startDate | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds no behavioral context beyond this, such as pagination details, rate limits, or authentication needs. Since annotations cover safety, the bar is lower, but the description doesn't enhance understanding of behavior beyond the basics.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but repetitive, stating 'List history about your assignments (audit)' twice with minor variation. It's front-loaded but wastes words on redundancy. While not overly verbose, it could be more efficient by eliminating the tautological restatement.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 10 parameters (0% schema coverage), no output schema, and no annotations beyond read/destructive hints, the description is inadequate. It doesn't explain parameter usage, return values, or behavioral nuances like pagination or error handling. The complexity of the tool warrants more comprehensive guidance than provided.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 10 parameters are documented in the schema. The description mentions 'filters' but doesn't specify which parameters correspond to filtering (e.g., tagId, startDate, endDate) or explain their purposes. It fails to compensate for the lack of schema documentation, leaving parameters largely unexplained.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters history about assignments, which clarifies the verb (list/filter) and resource (assignment history/audit). However, it's somewhat vague about what 'history about your assignments' entails and doesn't distinguish it from sibling tools like 'retrieveAssignmentsAuditsList' or 'retrieveAssignmentList', which appear related but are not explicitly differentiated.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description repeats the action without mentioning prerequisites, context, or exclusions. Given the many sibling tools with similar naming (e.g., retrieveAssignmentsAuditsList), this lack of differentiation is a significant gap.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveTagList: List tags (Grade C, Read-only)
List tags - List and filter all tags in your account
| Name | Required | Description | Default |
|---|---|---|---|
| name | No | ||
| page | No | ||
| size | No | ||
| orderBy | No | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate read-only and non-destructive behavior, which the description doesn't contradict. The description adds minimal context by implying filtering capabilities, but it doesn't detail behavioral aspects like pagination (implied by 'page' and 'size' parameters), rate limits, authentication needs, or return format. With annotations covering safety, it adds some value but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core function, but it's repetitive ('List tags - List and filter'). It wastes no space on unnecessary details, though it could be more structured by explicitly separating listing and filtering aspects.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 6 parameters with 0% schema coverage, no output schema, and annotations only covering read-only/destructive hints, the description is incomplete. It doesn't explain parameter usage, return values, error conditions, or operational constraints, leaving significant gaps for an agent to use the tool effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate, but it only vaguely mentions filtering without explaining parameters. It doesn't clarify what 'name', 'page', 'size', 'orderBy', 'x-trace-id', or 'x-request-id' do or how to use them. For 6 parameters with no schema descriptions, this is inadequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('List and filter') and resource ('tags in your account'), which is clear but vague. It doesn't specify what filtering capabilities exist or how they work, and it doesn't differentiate from sibling tools like 'retrieveTag' or 'createTag' beyond the basic list vs. retrieve/create distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites, typical use cases, or compare it to other tag-related tools like 'retrieveTag' (for single tag) or 'createTag' (for creation). The description only restates the tool's function without context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveUser (C, Read-only): Get specific user by id
Get specific user by id - Get attributes for a specific user.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description doesn't contradict this (it uses 'Get', not a write term). However, it adds no behavioral context beyond what annotations provide—no mention of authentication needs, rate limits, error conditions, or what happens if the user doesn't exist. With annotations covering safety, a baseline 3 is appropriate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but inefficiently structured—it repeats 'Get specific user by id' unnecessarily. The front-loading is adequate (purpose is clear early), but the second phrase adds little value. It could be more concise by merging ideas into a single sentence without redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 3 parameters with 0% schema coverage, no output schema, and annotations only covering read/destructive hints, the description is incomplete. It doesn't explain parameter roles, return values (what 'attributes' include), or error handling. For a tool with multiple parameters and no output schema, this leaves significant gaps for an agent to use it correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description only mentions 'id' (implied as userId), ignoring x-request-id and x-trace-id. It doesn't explain what these parameters are for (e.g., request tracing), their formats, or why they're required. This fails to compensate for the low schema coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves user attributes by ID, which is a clear purpose. However, it's somewhat vague ('Get attributes' doesn't specify what attributes) and doesn't distinguish from sibling tools like retrieveUserList or retrieveUserAuditsList. The repetition ('Get specific user by id - Get attributes...') adds noise but doesn't obscure the basic function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like retrieveUserList (for multiple users) or retrieveUserAuditsList (for audit data). The description implies it's for fetching a single user by ID, but there's no explicit comparison to sibling tools or context about prerequisites (e.g., needing a valid userId).
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
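A revised retrieveUser definition along the lines these critiques suggest might look as follows. This is a sketch only: the attribute list, error behavior, and header semantics are assumptions for illustration, not the published Contabo API contract.

```python
# Hypothetical rewrite of the retrieveUser tool definition, addressing the
# gaps noted above: documented parameters, a return contract, and explicit
# routing guidance toward sibling tools.
retrieve_user = {
    "name": "retrieveUser",
    "description": (
        "Get attributes (e.g. display name, email, enabled flag, roles) for "
        "a single user identified by userId. Read-only; fails if the userId "
        "does not exist. Use retrieveUserList to enumerate or filter users, "
        "and retrieveUserAuditsList for change history."
    ),
    "inputSchema": {
        "type": "object",
        "required": ["userId", "x-request-id"],
        "properties": {
            "userId": {
                "type": "string",
                "description": "UUID of the user to fetch.",
            },
            "x-request-id": {
                "type": "string",
                "description": "Client-chosen UUID echoed back for tracing.",
            },
            "x-trace-id": {
                "type": "string",
                "description": "Optional id to correlate related calls.",
            },
        },
    },
    "annotations": {"readOnlyHint": True, "destructiveHint": False},
}
```

With every property described, the schema coverage the rubric measures would rise from 0% to 100%, and the description itself now answers the selection and usage-guidance questions above.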
retrieveUserAuditsList (C, Read-only): List history about your users (audit)
List history about your users (audit) - List and filter the history about your users.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | | |
| size | No | | |
| userId | No | | |
| endDate | No | | |
| orderBy | No | | |
| changedBy | No | | |
| requestId | No | | |
| startDate | No | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds no behavioral context beyond what annotations provide—no mention of pagination behavior, rate limits, authentication needs, or what constitutes 'history'—resulting in minimal added value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded, stating the core purpose in a single sentence. However, it repeats 'List history about your users' unnecessarily, slightly reducing efficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 10 parameters, 0% schema coverage, no output schema, and no annotations beyond read/destructive hints, the description is inadequate. It lacks details on parameter usage, return values, error handling, or how it fits among sibling audit tools, leaving significant gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 10 parameters, the description fails to compensate. It mentions filtering but doesn't explain any parameters (e.g., page/size for pagination, userId for filtering, date formats), leaving semantics entirely undocumented beyond the schema's basic types.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool lists and filters user audit history, which is a clear purpose. However, it's somewhat vague about what 'history about your users' entails and doesn't distinguish this from sibling audit tools like retrieveAssignmentsAuditsList or retrieveInstancesAuditsList, which have similar patterns for different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites, appropriate contexts, or differentiate from other audit tools in the sibling list, leaving the agent to infer usage from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveUserClient (D, Read-only): Get client
Get client - Get idm client.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, which already tell the agent this is a safe read operation. The description adds no behavioral context beyond what annotations provide - no information about authentication requirements, rate limits, what 'idm client' means, or response format. However, it doesn't contradict annotations, so it gets the baseline score for annotations covering safety profile.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
While technically concise (two short phrases), this is under-specification rather than effective conciseness. The description wastes its limited space on repetition ('Get client - Get idm client') instead of providing meaningful information. Every sentence should earn its place, and this doesn't.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a retrieval tool with annotations covering safety but 0% schema description coverage and no output schema, the description is incomplete. It doesn't explain what 'idm client' means, what data is returned, or document the required parameters. The agent would struggle to use this tool effectively despite having annotations.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are completely undocumented in the schema. The description provides no information about the two parameters (x-trace-id and x-request-id), their purposes, or why they're needed. With 0% coverage and no parameter explanation in the description, this is inadequate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Get client - Get idm client' is tautological, essentially restating the name/title with slightly different wording. It doesn't specify what 'client' refers to (IDM client? API client? User client?) or what information is retrieved. While it distinguishes from non-retrieval siblings, it lacks the specificity needed for clear understanding.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided about when to use this tool versus alternatives. There are many sibling retrieval tools (retrieveUser, retrieveRole, retrieveInstance, etc.), but the description offers no context about when this specific 'client' retrieval is appropriate versus other retrieval operations. The agent receives no usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveUserIsPasswordSet (D, Read-only): Get user is password set status
Get user is password set status - Get info about idm user if the password is set.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | No | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description does not contradict these annotations, as 'Get' aligns with read-only. However, it adds minimal behavioral context beyond annotations—it mentions 'idm user' but does not clarify authentication needs, rate limits, or what 'password is set' entails (e.g., boolean response). No contradiction exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive and inefficient ('Get user is password set status - Get info about idm user if the password is set'). It uses two nearly identical phrases without adding value. While short, it is not front-loaded with critical information and wastes space on redundancy rather than clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (a read operation with 3 parameters, 0% schema coverage, no output schema), the description is inadequate. It does not explain the return value (e.g., boolean or structured data), parameter usage, or how it differs from similar tools. With annotations covering safety but no other context, the description leaves significant gaps for an agent to understand and invoke the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the three parameters (userId, x-trace-id, x-request-id) are documented in the schema. The description provides no information about these parameters—it does not explain what userId refers to, the purpose of x-request-id, or how they affect the query. This fails to compensate for the lack of schema documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological, essentially restating the tool name and title ('Get user is password set status - Get info about idm user if the password is set'). It does not provide a clear, specific verb+resource combination that distinguishes it from sibling tools like 'retrieveUser' or 'retrieveUserList'. The purpose is implied but not explicitly articulated beyond the name.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
There is no guidance on when to use this tool versus alternatives. It does not mention sibling tools (e.g., 'retrieveUser' for general user info) or specify contexts where checking password set status is appropriate. The description lacks any usage instructions, prerequisites, or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveUserList (C, Read-only): List users
List users - List and filter all your users.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | | |
| size | No | | |
| email | No | | |
| owner | No | | |
| enabled | No | | |
| orderBy | No | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, so the agent knows this is a safe read operation. The description adds minimal context by implying filtering capabilities, but doesn't disclose behavioral traits like pagination (suggested by 'page' and 'size' parameters), rate limits, authentication needs, or return format. With annotations covering safety, the description adds some value but lacks rich behavioral details.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two short phrases: 'List users - List and filter all your users.' It's front-loaded and wastes no words, but could be more structured (e.g., separating purpose from features). It earns its place by stating the core action and hinting at functionality, though it's slightly repetitive.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (8 parameters, 0% schema coverage, no output schema) and annotations (read-only, non-destructive), the description is incomplete. It doesn't explain return values, pagination, filtering logic, or error handling. For a list tool with multiple parameters, more context is needed to guide effective use, making it insufficient for the tool's requirements.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'filter all your users,' which loosely relates to parameters like 'email,' 'owner,' and 'enabled,' but doesn't explain their semantics, usage, or interactions. With 8 parameters (1 required) and no schema descriptions, the description adds minimal meaning beyond the schema, failing to adequately clarify parameter purposes.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool's purpose as 'List users - List and filter all your users,' which is clear but vague. It specifies the verb ('list') and resource ('users'), but gives no detail on scope and does not differentiate the tool from siblings such as 'retrieveUser' (single-user fetch) or 'retrieveUserAuditsList' (change history), making it adequate but not specific.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It mentions filtering ('filter all your users'), but doesn't specify when to choose this over other list tools (e.g., 'retrieveInstancesList' for instances) or when filtering is appropriate. No exclusions or prerequisites are stated, leaving usage unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveVip (C, Read-only): Get specific VIP by ip
Get specific VIP by ip - Get attributes values to a specific VIP on your account.
| Name | Required | Description | Default |
|---|---|---|---|
| ip | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a read-only, non-destructive operation, which the description doesn't contradict. The description adds minimal context by specifying retrieval by IP, but it doesn't provide additional behavioral details like error conditions, rate limits, or authentication requirements beyond what annotations cover.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but somewhat redundant ('Get specific VIP by ip' is repeated). It's front-loaded with the core purpose, but the second phrase adds little value, making it less efficient than it could be.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a tool with 3 parameters, 0% schema coverage, no output schema, and no annotations beyond basic hints, the description is inadequate. It doesn't explain what 'attributes values' are returned, how errors are handled, or the purpose of the 'x-' parameters, leaving significant gaps for the agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden for explaining parameters. It only mentions the 'ip' parameter, ignoring 'x-request-id' and 'x-trace-id'. This leaves two parameters undocumented, failing to compensate for the schema gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool retrieves a specific VIP by IP address, which is a clear purpose. However, it's somewhat vague about what 'get attributes values' means, and it doesn't distinguish this tool from sibling tools like 'retrieveVipList' or 'retrieveVipAuditsList' that also involve VIPs.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description doesn't mention sibling tools like 'retrieveVipList' (for listing VIPs) or 'retrieveVipAuditsList' (for audit logs), leaving the agent without context for tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveVipAuditsList (C, Read-only): List history about your VIPs (audit)
List history about your VIPs (audit) - List and filters the history about your VIPs.
| Name | Required | Description | Default |
|---|---|---|---|
| page | No | | |
| size | No | | |
| vipId | No | | |
| endDate | No | | |
| orderBy | No | | |
| changedBy | No | | |
| requestId | No | | |
| startDate | No | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, which the description does not contradict. However, the description adds minimal behavioral context beyond this—it mentions filtering but does not specify what types of audit events are included, any rate limits, authentication requirements, or pagination behavior. With annotations covering safety, the description provides some value but lacks depth.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but inefficient—it repeats 'List history about your VIPs' and uses redundant phrasing ('List and filters'). While front-loaded, it wastes words on tautology rather than providing useful information, making it less concise than it could be.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (10 parameters, 0% schema coverage, no output schema) and annotations that only cover safety, the description is inadequate. It does not explain what the tool returns, how filtering works, or parameter usage, leaving significant gaps for an audit-listing tool with many inputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 10 parameters are documented in the schema. The description only vaguely mentions 'filters' without explaining any parameters like vipId, startDate, endDate, or pagination controls (page, size). It fails to compensate for the complete lack of schema documentation, leaving parameters semantically unclear.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description restates the title ('List history about your VIPs') and adds a tautological phrase ('List and filters the history about your VIPs'), which merely repeats the name/function without providing specific details about what 'history' entails or how it differs from other audit tools like retrieveInstancesAuditsList or retrieveDomainsAuditsList. It fails to clearly distinguish this tool from its siblings.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. The description does not mention any context, prerequisites, or exclusions, leaving the agent without direction on appropriate usage scenarios compared to other audit-related tools in the sibling list.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
retrieveVipList (C, Read-only): List VIPs
List VIPs - List and filter all vips in your account
| Name | Required | Description | Default |
|---|---|---|---|
| ip | No | | |
| ips | No | | |
| page | No | | |
| size | No | | |
| type | No | | |
| region | No | | |
| orderBy | No | | |
| ipVersion | No | | |
| dataCenter | No | | |
| resourceId | No | | |
| x-trace-id | No | | |
| resourceName | No | | |
| resourceType | No | | |
| x-request-id | Yes | | |
| resourceDisplayName | No | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=true and destructiveHint=false, covering safety. The description adds minimal behavioral context—it mentions filtering, which hints at query capabilities, but doesn't detail pagination behavior, rate limits, authentication needs, or what 'VIPs' represent. With annotations present, the bar is lower, but the description offers only basic supplemental value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but under-specified—it's a single sentence with no wasted words, yet it lacks necessary detail. While front-loaded, it doesn't earn its place by adding meaningful value beyond the tool name, making it inefficient rather than appropriately brief.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (15 parameters, 0% schema coverage, no output schema) and minimal annotations, the description is inadequate. It doesn't explain what VIPs are, how filtering works, return format, or error handling. For a list/filter tool with many undocumented parameters, this leaves significant gaps for an agent to operate effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the 15 parameters are documented in the schema. The description only vaguely mentions 'filter' without explaining any parameters like 'ip', 'type', 'region', or pagination fields. It fails to compensate for the complete lack of schema documentation, leaving parameters semantically opaque.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'List VIPs - List and filter all vips in your account' is tautological, essentially restating the tool name and title without adding specificity. It mentions 'filter' but doesn't clarify what VIPs are or what filtering entails, failing to distinguish this tool from other list/retrieve siblings like retrieveVip or retrieveVipAuditsList.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With siblings like retrieveVip (likely for single VIP retrieval) and retrieveVipAuditsList (likely for audit logs), the description offers no explicit or implied context for selection, leaving the agent to guess based on naming patterns alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
revokeCancelDomain (C, Destructive): Revoke cancellation for a specific domain
Revoke cancellation for a specific domain - Revoke cancellation for a specific domain
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | | |
| x-trace-id | No | | |
| x-request-id | Yes | | |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
The annotations indicate this is a destructive, non-read-only operation (destructiveHint: true, readOnlyHint: false). The description doesn't contradict these annotations, and while it doesn't explicitly add behavioral details beyond what annotations provide, the annotations themselves give clear safety information. The description doesn't add context about side effects, permissions needed, or rate limits, but with good annotation coverage, the bar is lower.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is technically concise but severely under-specified. It wastes space by repeating the same phrase twice with a dash separator, which adds no information. This isn't effective conciseness—it's inadequate documentation masquerading as brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with 3 parameters (2 required) and 0% schema description coverage, this description is woefully incomplete. There's no output schema, no parameter explanations, no usage context, and minimal behavioral disclosure beyond what annotations provide. The description doesn't adequately support tool selection or invocation given the complexity.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The schema description coverage is 0%, meaning none of the three parameters (domain, x-request-id, x-trace-id) have descriptions in the schema. The tool description provides absolutely no information about what these parameters mean, their formats, or how they should be used. The description fails completely to compensate for the schema's lack of documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is a tautology that merely repeats the title ('Revoke cancellation for a specific domain - Revoke cancellation for a specific domain'). It doesn't provide any additional clarity about what the tool actually does beyond restating the name/title. While the verb 'revoke cancellation' is somewhat specific, the description adds no value.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides absolutely no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (like the domain must be in a cancelled state), doesn't reference the sibling 'cancelDomain' tool, and gives no context about appropriate use cases or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
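The gaps called out above can be closed in the definition itself. Below is a minimal sketch of a fuller revokeCancelDomain definition, assuming the prerequisite (a prior cancelDomain call) and the header-parameter semantics; these are inferences from the critique, not documented Contabo behavior.

```python
# Hypothetical, fleshed-out definition for revokeCancelDomain.
# Parameter semantics and the prerequisite note are assumptions,
# not taken from Contabo's actual documentation.
revoke_cancel_domain = {
    "name": "revokeCancelDomain",
    "description": (
        "Revoke a pending cancellation for a domain, keeping the domain "
        "active past its scheduled cancellation date. Only valid for a "
        "domain previously cancelled with cancelDomain; fails otherwise. "
        "Use cancelDomain to schedule a cancellation and this tool to "
        "undo it before it takes effect."
    ),
    "inputSchema": {
        "type": "object",
        "required": ["domain", "x-request-id"],
        "properties": {
            "domain": {
                "type": "string",
                "description": "Fully qualified domain name, e.g. 'example.com'.",
            },
            "x-request-id": {
                "type": "string",
                "description": "Client-generated UUID identifying this request "
                               "for idempotency and support lookups.",
            },
            "x-trace-id": {
                "type": "string",
                "description": "Optional ID to correlate related requests in logs.",
            },
        },
    },
}
```

A definition in this shape would lift the parameter-coverage and usage-guidance dimensions without adding many tokens.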
revokeDomainTransferOutRevoke transfer out for a domainCDestructiveInspect
Revoke transfer out for a domain - Revoke transfer out for a domain
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which correctly align with the 'revoke' action implying a destructive write operation. The description adds no behavioral context beyond what annotations provide (no rate limits, auth needs, or side effects described). No contradiction exists, but no additional value is added.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is repetitive ('Revoke transfer out for a domain - Revoke transfer out for a domain'), wasting space without adding clarity. It's not front-loaded with useful information; it merely echoes the title in a redundant format.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 3 parameters (0% schema coverage) and no output schema, the description is inadequate. It doesn't explain the operation's effect, required domain state, error conditions, or return values. Given the complexity and lack of structured documentation, more context is needed.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description provides no information about the three parameters (domain, x-request-id, x-trace-id), such as what 'domain' expects (name, ID, format) or the purpose of the request/trace IDs. It fails to compensate for the schema's lack of descriptions.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is a tautology that merely restates the title ('Revoke transfer out for a domain - Revoke transfer out for a domain'). It doesn't explain what 'transfer out' means or what the revocation actually does. While the verb 'revoke' is clear, the resource 'transfer out for a domain' is ambiguous without context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives like 'cancelDomain' or 'revokeCancelDomain'. The description doesn't mention prerequisites (e.g., domain must be in transfer-out state) or when revocation is appropriate versus other domain management operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
rollbackSnapshotRevert the instance to a particular snapshot based on its identifierBDestructiveInspect
Revert the instance to a particular snapshot based on its identifier - Rollback the instance to a specific snapshot. If the snapshot is not the latest one, all newer snapshots of the instance will be deleted automatically.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| snapshotId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| rollbackSnapshotBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by implying data mutation. The description adds valuable context beyond annotations: it explicitly states that rolling back to a non-latest snapshot will automatically delete all newer snapshots, which is critical behavioral information not captured in annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is two sentences that are somewhat repetitive ('revert' and 'rollback' convey similar ideas). While not verbose, the repetition slightly reduces efficiency. It is front-loaded with the core action.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive operation with no output schema and 0% parameter documentation, the description does address a key behavioral trait (deletion of newer snapshots). However, it lacks details on prerequisites, permissions, error conditions, or what happens to the instance state, leaving important gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 5 parameters (including nested objects), the description provides no information about what parameters like 'instanceId', 'snapshotId', 'x-request-id', or 'rollbackSnapshotBody' mean or how to use them. This leaves significant gaps in understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('revert', 'rollback') and target ('instance', 'snapshot'), making the purpose understandable. However, it doesn't explicitly differentiate from sibling tools like 'createSnapshot' or 'deleteSnapshot', which a top score would require.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives like 'reinstallInstance' or 'restore from snapshot' operations. It mentions a behavioral consequence (deleting newer snapshots) but doesn't frame this as usage advice.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
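Because the tool deletes newer snapshots as a side effect, a caller may want to assemble the request explicitly before committing. A hedged sketch in Python, assuming the REST path /compute/instances/{instanceId}/snapshots/{snapshotId}/rollback and a UUID-valued x-request-id, both inferred from the parameter table rather than confirmed documentation:

```python
import uuid

API_BASE = "https://api.contabo.com/v1"  # assumed base URL

def build_rollback_request(instance_id: int, snapshot_id: str, token: str) -> dict:
    """Assemble the rollbackSnapshot call as a dict; does not send it.

    Warning: per the tool description, rolling back to a non-latest
    snapshot deletes all newer snapshots of the instance.
    """
    return {
        "method": "POST",
        "url": (f"{API_BASE}/compute/instances/{instance_id}"
                f"/snapshots/{snapshot_id}/rollback"),
        "headers": {
            "Authorization": f"Bearer {token}",
            # required header parameter; a fresh UUID per request
            "x-request-id": str(uuid.uuid4()),
        },
        # rollbackSnapshotBody: assumed to be an empty JSON object
        "json": {},
    }
```

Keeping the request construction separate from the send step makes it easy to log or confirm the destructive call before it fires.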
setDefaultHandleSet default handleDDestructiveInspect
Set default handle - Set default handle
| Name | Required | Description | Default |
|---|---|---|---|
| handleId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, clearly signaling a non-read-only, destructive operation. The description doesn't contradict these annotations, but it adds no behavioral context beyond them—such as what 'destructive' entails (e.g., overriding a previous default), authentication needs, or rate limits. With annotations covering safety, the description adds minimal value.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description 'Set default handle - Set default handle' is repetitive and under-specified, not concise. It wastes space with redundancy instead of providing useful information, making it inefficient and poorly structured.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with 3 parameters, 0% schema coverage, no output schema, and multiple sibling tools, the description is inadequate. It doesn't explain the tool's purpose, parameters, or behavior sufficiently, leaving significant gaps for the agent to navigate.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the schema provides no parameter descriptions. The description offers no information about parameters like 'handleId', 'x-request-id', or 'x-trace-id', failing to compensate for the lack of schema documentation. This leaves the agent guessing about parameter purposes and formats.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Set default handle - Set default handle' is a tautology that merely restates the name and title without explaining what a 'handle' is or what 'default' means in this context. It lacks a specific verb-resource combination and doesn't distinguish this tool from sibling tools like 'createHandle', 'updateHandle', or 'removeHandle'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing handle), exclusions, or relationships to sibling tools like 'listHandles' or 'retrieveHandle', leaving the agent with no context for appropriate invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
shutdownShutdown compute instance / resource by its idADestructiveInspect
Shutdown compute instance / resource by its id - Shut down a compute instance / resource. This is similar to pressing the power button on a physical machine. This will send an ACPI event to the guest OS, which should then proceed with a clean shutdown.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by describing a shutdown action. The description adds valuable context beyond annotations by explaining that it sends an ACPI event for a clean shutdown, similar to pressing a power button, which clarifies the behavioral impact without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action and uses two sentences efficiently to explain the shutdown mechanism. It avoids unnecessary details, though it could be slightly more structured by separating usage notes from behavioral explanations.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and low parameter coverage, the description provides adequate behavioral context but lacks details on error conditions, response format, or prerequisites. It covers the shutdown action well but leaves gaps in full operational understanding.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It only alludes to 'instanceId' ('by its id') and says nothing about the semantics of 'x-request-id' or 'x-trace-id', reducing clarity.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Shutdown'), target resource ('compute instance / resource'), and mechanism ('by its id'), making the purpose specific and unambiguous. It distinguishes from siblings like 'stop' or 'restart' by emphasizing the clean shutdown process via ACPI event.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage for clean shutdowns, but does not explicitly state when to use this tool versus alternatives like 'stop' or 'restart' from the sibling list. It provides some context about the shutdown behavior but lacks explicit guidance on prerequisites or exclusions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
startStart a compute instance / resource identified by its idBDestructiveInspect
Start a compute instance / resource identified by its id - Starting a compute instance / resource is like powering on a real server. If the compute instance / resource is already started, nothing will happen. You may check the current status anytime when getting information about a compute instance / resource.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by describing a state-changing action. It adds valuable context: idempotency ('nothing will happen' if already started) and suggests checking status via another tool, enhancing understanding beyond annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core purpose, uses a clear analogy, and adds behavioral notes efficiently. It avoids redundancy but could be slightly tighter by merging the status check suggestion into one sentence.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema and poor parameter documentation, the description covers purpose and idempotency well but lacks details on permissions, side effects, or error handling. It's partially complete but misses key contextual elements for safe invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for 3 parameters, the description fails to explain any parameters. It mentions 'id' generically but doesn't clarify instanceId, x-request-id, or x-trace-id, leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Start') and target ('compute instance/resource'), with a helpful analogy to 'powering on a real server'. It distinguishes from siblings like 'stop' or 'restart' by focusing on initiation, though it doesn't explicitly name alternatives.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when needing to power on an instance, mentioning that nothing happens if already started and suggesting checking status via 'retrieveInstance'. However, it lacks explicit when-not-to-use guidance or named alternatives beyond the status check hint.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
stopStop compute instance / resource by its idADestructiveInspect
Stop compute instance / resource by its id - Stopping a compute instance / resource is like powering off a real server. So please be aware that data may be lost. Alternatively you may log in and shut your compute instance / resource gracefully via the operating system. If the compute instance / resource is already stopped nothing will happen. You may check the current status anytime when getting information about a compute instance / resource.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by warning 'data may be lost.' It adds valuable context beyond annotations: the idempotent behavior ('If... already stopped nothing will happen') and the recommendation to check status with 'retrieveInstance,' enhancing transparency without contradiction.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is front-loaded with the core action and key warnings, followed by alternatives and behavioral notes. It avoids redundancy but includes a slightly verbose analogy ('like powering off a real server'), which could be tightened without losing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive tool with no output schema, the description covers critical aspects: purpose, risks, idempotency, and alternatives. It leverages annotations well but could detail error conditions or response formats. Given the complexity, it is largely complete but has minor gaps in parameter and output explanation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, but the description mentions 'by its id,' hinting at the 'instanceId' parameter. However, it does not explain the purpose or format of 'x-request-id' or 'x-trace-id.' With high parameter count (3) and no schema descriptions, the description provides minimal semantic value, meeting the baseline for partial coverage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description explicitly states the action ('Stop compute instance / resource') and specifies the target ('by its id'), making the purpose clear. It distinguishes from siblings like 'shutdown' by emphasizing the forceful nature of stopping versus graceful shutdown, and from 'start' as the opposite action.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides explicit guidance on when to use this tool ('Stopping... is like powering off a real server') and when not to use it ('Alternatively you may log in and shut... gracefully via the operating system'). It also mentions checking status with 'retrieveInstance' as an alternative for verification, covering usage context thoroughly.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
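The start, stop, and shutdown tools above differ mainly in how the power state changes (clean ACPI shutdown versus hard power-off with possible data loss). A small helper sketching that choice, with endpoint paths assumed from the tool names and not verified against Contabo's published API:

```python
import uuid

API_BASE = "https://api.contabo.com/v1"  # assumed base URL

POWER_ACTIONS = {"start", "stop", "shutdown"}

def build_power_request(instance_id: int, action: str, token: str,
                        force: bool = False) -> dict:
    """Build a power-state request for an instance; does not send it.

    Prefer 'shutdown' (clean ACPI event) over 'stop' (hard power-off,
    data may be lost). force=True downgrades a shutdown to a hard stop.
    """
    if action == "shutdown" and force:
        action = "stop"
    if action not in POWER_ACTIONS:
        raise ValueError(f"unknown action: {action}")
    return {
        "method": "POST",
        "url": f"{API_BASE}/compute/instances/{instance_id}/actions/{action}",
        "headers": {
            "Authorization": f"Bearer {token}",
            "x-request-id": str(uuid.uuid4()),  # required header parameter
        },
    }
```

Since start and stop are described as idempotent (nothing happens if the instance is already in the target state), retrying a failed call with a fresh x-request-id is a reasonable recovery strategy.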
tool_searchTool SearchARead-onlyInspect
Search the available tool catalog to discover tools by name, description, or arguments. Use a regex pattern or natural language query to find relevant tools. Returns tool references that are automatically expanded into full tool definitions.
| Name | Required | Description | Default |
|---|---|---|---|
| mode | No | Search mode: "regex" for pattern matching, "bm25" for natural language. Defaults to "bm25". | |
| query | Yes | A regex pattern or natural-language query to search tool names, descriptions, and argument names. Maximum 200 characters. |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations already declare readOnlyHint=true, destructiveHint=false, and openWorldHint=false, covering safety and scope. The description adds useful context about what gets returned ('tool references that are automatically expanded into full tool definitions') and mentions the 200-character query limit, which provides additional behavioral insight beyond the annotations.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is perfectly concise with two sentences that each earn their place. The first sentence establishes purpose and scope, while the second explains the return behavior. There's zero wasted language or redundancy.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a search tool with good annotations (readOnly, non-destructive) and full schema coverage, the description provides adequate context. It explains what gets searched, the query types, and return format. The main gap is the lack of output schema, but the description compensates by explaining what gets returned ('tool references expanded into full tool definitions').
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 100% schema description coverage, the schema already fully documents both parameters (mode with enum values and query with character limit). The description mentions 'regex pattern or natural language query' which aligns with the schema but doesn't add significant semantic value beyond what's already in the structured fields.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose with specific verbs ('search', 'discover') and resources ('tool catalog', 'tools by name, description, or arguments'). It distinguishes itself from sibling tools by focusing on tool discovery rather than resource management, as all siblings are CRUD operations for specific resources like instances, domains, or users.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides clear context for when to use this tool ('to discover tools by name, description, or arguments') and mentions the search modes ('regex pattern or natural language query'). However, it doesn't explicitly state when NOT to use it or name specific alternative tools for different search needs.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
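Both documented search modes can be exercised through a tiny argument builder that enforces the schema's stated limits; the helper name is illustrative, not part of the server's API:

```python
def make_tool_search_args(query: str, mode: str = "bm25") -> dict:
    """Build an arguments dict for tool_search, validating the
    documented constraints (two modes, 200-character query limit)."""
    if mode not in ("regex", "bm25"):
        raise ValueError("mode must be 'regex' or 'bm25'")
    if len(query) > 200:
        raise ValueError("query is limited to 200 characters")
    return {"query": query, "mode": mode}

# Natural-language discovery, then a precise regex follow-up:
nl = make_tool_search_args("revoke a pending domain cancellation")
rx = make_tool_search_args(r"^(assign|unassign)Instance", mode="regex")
```

With 124 tools in this catalog, a broad bm25 pass followed by a regex pass over the candidate names is a plausible two-step discovery pattern.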
unassignInstancePrivateNetworkRemove instance from a Private NetworkCDestructiveInspect
Remove instance from a Private Network - Remove a specific instance from a Private Network
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| privateNetworkId | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by implying a removal action. However, the description adds no additional behavioral context beyond what annotations provide, such as effects on network connectivity, permissions required, or error conditions. No contradiction with annotations is present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded, stating the core action clearly in the first phrase. Repeating the phrase adds no information and is slightly redundant. Overall it is efficient with minimal waste, though it could be more structured with additional details.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool has 4 parameters with 0% schema coverage, no output schema, and annotations only cover safety hints, the description is incomplete. It lacks parameter explanations, usage context, and behavioral details needed for effective tool invocation, making it inadequate for a destructive operation with multiple inputs.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning parameters are undocumented in the schema. The description does not explain any parameters, such as what 'instanceId' and 'privateNetworkId' represent or the purpose of 'x-request-id'. This leaves significant gaps in understanding how to invoke the tool correctly.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the action ('Remove instance from a Private Network') and repeats it for emphasis, which clarifies the verb and resource. However, it does not differentiate from sibling tools like 'unassignIp' or 'deletePrivateNetwork', leaving ambiguity about when to use this specific removal operation versus other disassociation or deletion tools.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance is provided on when to use this tool versus alternatives. While the title and description imply it's for removing an instance from a private network, there is no mention of prerequisites, conditions, or comparison to siblings like 'assignInstancePrivateNetwork' for context on reversal or 'deletePrivateNetwork' for more destructive actions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
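Since assignInstancePrivateNetwork and unassignInstancePrivateNetwork take the same two identifiers, the pair can be sketched as a single request builder. The HTTP verbs and path here are assumptions inferred from the parameter table, not confirmed endpoints:

```python
API_BASE = "https://api.contabo.com/v1"  # assumed base URL

def private_network_request(instance_id: int, private_network_id: int,
                            assign: bool) -> dict:
    """Build the membership-change request; does not send it.

    POST is assumed to add the instance to the Private Network,
    DELETE to remove it (mirroring assign/unassign semantics).
    """
    return {
        "method": "POST" if assign else "DELETE",
        "url": (f"{API_BASE}/private-networks/{private_network_id}"
                f"/instances/{instance_id}"),
    }
```

Modeling the two tools as one reversible operation also makes the relationship the evaluation asks for (assign as the inverse of unassign) explicit to an agent.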
unassignIp: Unassign a VIP to a VPS/VDS/Bare Metal (C, Destructive)
Unassign a VIP to a VPS/VDS/Bare Metal - Unassign a VIP from an VPS/VDS/Bare Metal using the machine id.
| Name | Required | Description | Default |
|---|---|---|---|
| ip | Yes | ||
| resourceId | Yes | ||
| x-trace-id | No | ||
| resourceType | Yes | ||
| x-request-id | Yes |
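As an illustration only, the required arguments from the table above might be assembled like this. Parameter names come from the table; the example values and the `instance` resource type are assumptions, since valid values are undocumented, and the UUID format for `x-request-id` is inferred from the schema notes elsewhere on this page.

```python
import uuid

# Hypothetical argument payload for the unassignIp tool, built from the
# parameter table above. Values are illustrative; valid resourceType
# values are undocumented, so "instance" is an assumption.
arguments = {
    "ip": "203.0.113.10",               # VIP to unassign (example address)
    "resourceId": "100001",             # machine id of the VPS/VDS/Bare Metal
    "resourceType": "instance",         # ASSUMPTION: valid values not documented
    "x-request-id": str(uuid.uuid4()),  # required; UUID format is assumed
    # "x-trace-id" is optional and omitted here
}

missing = {"ip", "resourceId", "resourceType", "x-request-id"} - arguments.keys()
print(missing)  # an empty set when all required parameters are present
```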
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description aligns with by implying a mutation ('Unassign'). However, the description adds minimal behavioral context beyond this—it doesn't specify effects like network disruption, permissions required, or error conditions. With annotations covering safety, it earns a baseline score for not contradicting them but lacks enrichment.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but somewhat redundant, repeating 'Unassign a VIP to a VPS/VDS/Bare Metal' in slightly different wording. It's front-loaded with the main action but could be more efficient by avoiding tautology and integrating parameter hints more clearly.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (5 parameters, 0% schema coverage, destructive operation, no output schema), the description is inadequate. It lacks details on parameter meanings, expected outcomes, error handling, or dependencies, leaving significant gaps for an agent to operate correctly in this context.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but fails to do so. It mentions 'machine id' (likely referring to resourceId) but ignores other required parameters like ip, resourceType, and x-request-id. This leaves key inputs undocumented, hindering effective tool use.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Unassign a VIP') and the target resources ('VPS/VDS/Bare Metal'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'assignIp' or 'unassignInstancePrivateNetwork', which would require mentioning it specifically removes IP assignments rather than network or other assignments.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., VIP must be assigned first), exclusions, or compare to related tools like 'assignIp' or 'unassignInstancePrivateNetwork', leaving the agent to infer usage context from the tool name alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateDnsZoneRecord: Update DNS zone record (C, Destructive)
Update DNS zone record - Create resource record in a zone
| Name | Required | Description | Default |
|---|---|---|---|
| recordId | Yes | ||
| zoneName | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updateDnsZoneRecordBody | Yes |
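A sketch of what a call might look like, assembled only from the parameter table and the body field names (`ttl`, `prio`, `data`) that the schema exposes. The types, units, and example values are assumptions, not documented behavior.

```python
import uuid

# Hypothetical call arguments for updateDnsZoneRecord. recordId and
# zoneName come from the parameter table; ttl, prio, and data are the
# body field names from the schema. Types and units are assumptions.
arguments = {
    "zoneName": "example.com",
    "recordId": "42",
    "x-request-id": str(uuid.uuid4()),
    "updateDnsZoneRecordBody": {
        "ttl": 3600,             # ASSUMPTION: time-to-live in seconds
        "prio": 10,              # ASSUMPTION: priority, e.g. for MX records
        "data": "203.0.113.10",  # ASSUMPTION: the record's value
    },
}
```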
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description doesn't contradict. However, it adds minimal behavioral context beyond annotations—it hints at record creation/modification but lacks details on permissions, side effects, or error handling. Since annotations cover safety, the bar is lower, but more context (e.g., impact on DNS resolution) would improve it.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but not front-loaded with useful information—it's a single, unclear sentence. While concise, it wastes space on tautology rather than providing value, making it inefficient despite its short length.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 5 parameters, 0% schema coverage, no output schema, and complex nested objects, the description is inadequate. It lacks details on behavior, parameters, return values, or error cases, failing to provide the completeness needed for safe and effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters like recordId, zoneName, and updateDnsZoneRecordBody fields (ttl, prio, data) are undocumented. The description adds no semantic meaning beyond the schema, failing to explain what these parameters represent (e.g., recordId for targeting, ttl for time-to-live). With low coverage, the description doesn't compensate.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update DNS zone record - Create resource record in a zone' is tautological, restating the name/title without clarifying the action. It confusingly mixes 'update' and 'create' without explaining whether this modifies existing records or creates new ones, failing to distinguish from sibling tools like createDnsZoneRecord or deleteDnsZoneRecord.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing zone), exclusions, or comparisons to sibling tools like createDnsZoneRecord or bulkDeleteDnsZoneRecords, leaving the agent with no contextual usage information.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateDomain: Update a specific domain (C, Destructive)
Update a specific domain - Update nameservers and handles for a specific domain
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updateDomainBody | Yes |
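The description says only nameservers and handles can be updated, so a call might look like the sketch below. The field name inside `updateDomainBody` is a guess, since the body schema is undocumented.

```python
import uuid

# Hypothetical arguments for updateDomain. Per the description, only
# nameservers and handles are updatable; the "nameservers" field name
# inside the body is an ASSUMPTION, not a documented schema field.
arguments = {
    "domain": "example.com",
    "x-request-id": str(uuid.uuid4()),
    "updateDomainBody": {
        "nameservers": ["ns1.example.net", "ns2.example.net"],  # ASSUMPTION
    },
}
```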
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description aligns with by implying mutation ('Update'). However, the description adds minimal behavioral context beyond annotations—it doesn't specify permission requirements, rate limits, or what 'destructive' entails (e.g., irreversible changes). No contradiction exists, but value beyond annotations is limited.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but repetitive ('Update a specific domain' appears twice). It front-loads the purpose but wastes words on redundancy. A single sentence like 'Update nameservers and handles for a specific domain' would suffice, making the current version slightly inefficient.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given 4 parameters with 0% schema coverage, no output schema, and destructive annotations, the description is inadequate. It doesn't explain parameter roles, expected outcomes, error conditions, or side effects. For a mutation tool with complex nested inputs, more context is needed to guide the agent effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate but fails to do so. It mentions 'nameservers and handles' but doesn't explain parameter meanings, required fields, or data formats (e.g., UUID patterns for 'x-request-id'). The agent cannot infer semantics for 'domain', 'x-request-id', or 'updateDomainBody' structure from the description alone.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update') and resource ('a specific domain'), and specifies what fields can be updated ('nameservers and handles'). It distinguishes from siblings like 'retrieveDomain' (read-only) and 'cancelDomain' (different action), though it doesn't explicitly differentiate from other update tools like 'updateHandle' or 'updateDnsZoneRecord'.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing domain), exclusions (e.g., what cannot be updated), or comparisons to similar tools like 'updateHandle' or 'updateDnsZoneRecord'. The agent must infer usage from the name and description alone.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateHandle: Update specific handle (D, Destructive)
Update specific handle - Update specific handle
| Name | Required | Description | Default |
|---|---|---|---|
| handleId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updateHandleBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which tells the agent this is a mutation that may cause irreversible changes. The description adds no behavioral context beyond what annotations provide - no information about permissions needed, rate limits, side effects, or what specifically gets destroyed. However, it doesn't contradict the annotations, so it earns a baseline score: not misleading, but adding no value either.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description 'Update specific handle - Update specific handle' is not concise - it's repetitive and wasteful. The repetition adds no value, and the structure provides no useful information. This isn't appropriate conciseness but rather under-specification masquerading as brevity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool (destructiveHint=true) with 4 parameters, 0% schema description coverage, no output schema, and complex nested objects, the description is completely inadequate. It provides no information about what the tool does, when to use it, what parameters mean, what behavior to expect, or what gets returned. This leaves the agent with insufficient information to use the tool correctly.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning none of the parameters have descriptions in the schema. The tool description provides absolutely no information about any parameters - not what handleId represents, not what updateHandleBody contains, not what x-request-id is for. With 4 parameters (3 required) and 0% schema coverage, the description fails completely to compensate for this documentation gap.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update specific handle - Update specific handle' is a tautology that merely restates the name and title without adding any meaningful information about what the tool actually does. It doesn't specify what a 'handle' represents in this context or what aspects are being updated, making it vague and unhelpful for understanding the tool's function.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides absolutely no guidance on when to use this tool versus alternatives. With sibling tools like 'createHandle', 'retrieveHandle', 'removeHandle', and 'setDefaultHandle', there's no indication of when this update operation is appropriate versus creating a new handle or using other handle-related operations.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateImage: Update custom image name by its id (C, Destructive)
Update custom image name by its id - Update name of the custom image
| Name | Required | Description | Default |
|---|---|---|---|
| imageId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updateImageBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, showing this is a mutation with destructive potential. The description doesn't contradict these, but adds minimal behavioral context beyond the update action. It lacks details on permissions, side effects, or error handling, though annotations cover the safety profile adequately.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief but inefficiently structured, repeating the same idea. It's front-loaded but wastes words on redundancy ('Update custom image name by its id - Update name of the custom image'). A single clear sentence would suffice.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 4 parameters, 0% schema coverage, and no output schema, the description is inadequate. It doesn't compensate for missing parameter docs, explain return values, or provide usage context, making it incomplete for safe and effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so parameters are undocumented in the schema. The description only mentions 'name' and 'id', but there are 4 parameters including x-request-id, x-trace-id, and updateImageBody with nested properties. It fails to explain parameter purposes or formats, leaving significant gaps.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description is tautological, essentially restating the title ('Update custom image name by its id - Update name of the custom image'). It specifies the verb (update) and resource (custom image name), but lacks specificity about what 'update' entails and doesn't differentiate from sibling tools like updateSnapshot or updateTag.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing image), exclusions, or related tools like createCustomImage or deleteImage, leaving the agent without context for selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateObjectStorage: Modifies the display name of object storage (C, Destructive)
Modifies the display name of object storage - Modifies the display name of object storage. Display name must be unique.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| objectStorageId | Yes | ||
| updateObjectStorageBody | Yes |
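Based on the description (the display name must be unique), a call might be shaped like this sketch. The `displayName` field inside the body is implied by the tool description rather than documented in the schema, and the id value is a placeholder.

```python
import uuid

# Hypothetical arguments for updateObjectStorage. The displayName field
# is implied by the description ("Display name must be unique") but not
# documented in the schema, so its exact name is an ASSUMPTION.
arguments = {
    "objectStorageId": "my-storage-id",  # placeholder id
    "x-request-id": str(uuid.uuid4()),
    "updateObjectStorageBody": {
        "displayName": "backups-eu-1",   # must be unique across your storages
    },
}
```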
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, confirming this is a mutation tool with destructive potential. The description adds context by specifying that it modifies the display name and that the name must be unique, which provides useful behavioral details beyond the annotations. No contradiction with annotations is present.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but repetitive, stating 'Modifies the display name of object storage' twice. It is front-loaded with the main action, but the repetition reduces efficiency, making it less than optimal in structure.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the complexity (4 parameters, nested objects, no output schema, and 0% schema coverage), the description is incomplete. It lacks details on parameters, error conditions, or what the tool returns, failing to provide sufficient context for effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage and 4 parameters, the description does not compensate adequately. It only implies the 'displayName' parameter through the mention of modifying the display name, but other parameters like 'x-request-id' and 'objectStorageId' are undocumented, leaving significant gaps in understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool 'Modifies the display name of object storage,' which is a clear verb+resource combination. However, it doesn't distinguish this operation from sibling tools such as 'upgradeObjectStorage,' leaving the purpose somewhat vague regarding differentiation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives, such as 'createObjectStorage' or 'deleteObjectStorage.' It mentions that 'Display name must be unique,' which is a constraint but not explicit usage advice, so there is minimal guidance.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updatePtrRecord: Edit a PTR Record by ip address (C, Destructive)
Edit a PTR Record by ip address - Edit attributes for a specific PTR Record
| Name | Required | Description | Default |
|---|---|---|---|
| ipAddress | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updatePtrRecordBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a destructive, non-read-only operation. The description confirms it's an edit operation, which aligns with the annotations. However, it doesn't add significant behavioral context beyond what annotations provide, such as permission requirements, side effects, or error conditions.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core action. However, it's slightly repetitive ('Edit a PTR Record by ip address - Edit attributes...'), and the second part adds minimal value, suggesting minor inefficiency.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 4 parameters, 0% schema coverage, and no output schema, the description is inadequate. It lacks details on parameters, expected outcomes, error handling, and how it fits with sibling tools, leaving significant gaps for an AI agent.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries full burden for explaining parameters. It mentions 'ip address' and 'attributes', but doesn't detail what parameters are needed (e.g., x-request-id, updatePtrRecordBody) or their purposes, leaving key inputs undocumented.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool edits a PTR record by IP address, which is a clear verb+resource combination. However, it's somewhat vague ('Edit attributes') and doesn't differentiate from sibling tools like 'updateDnsZoneRecord' or 'createPtrRecord' beyond the basic operation.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. It doesn't mention prerequisites, when not to use it, or how it differs from similar tools like 'createPtrRecord' or 'deletePtrRecord'.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateRole: Update specific role by id (C, Destructive)
Update specific role by id - Update attributes to your role. Attributes are optional. If not set, the attributes will retain their original values.
| Name | Required | Description | Default |
|---|---|---|---|
| roleId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updateRoleBody | Yes |
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description aligns with by implying mutation ('Update attributes'). The description adds that 'Attributes are optional. If not set, the attributes will retain their original values,' which is useful behavioral context not covered by annotations. However, it lacks details on permissions needed, side effects, or error conditions, leaving room for improvement.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the main action, but it's somewhat redundant ('Update specific role by id - Update attributes...'). The second sentence adds value by clarifying optional attributes, but overall structure is minimal and could be more polished without sacrificing clarity.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (4 parameters, nested objects, no output schema) and lack of schema descriptions, the description is insufficient. It doesn't explain the input parameters, expected outcomes, or error handling. For a destructive mutation tool with rich input structure, more context is needed to guide effective use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, meaning no parameters have descriptions in the schema. The description only vaguely mentions 'attributes' without explaining what they are (e.g., name, admin, permissions). It fails to compensate for the schema gap, leaving key parameters like 'roleId', 'updateRoleBody', and 'x-request-id' unexplained in terms of semantics or usage.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'Update specific role by id - Update attributes to your role.' It specifies the verb (update), resource (role), and identifier (by id). However, it doesn't explicitly differentiate from sibling tools like 'createRole' or 'deleteRole' beyond the update action, which is why it doesn't reach a 5.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing role ID), compare it to 'createRole' or 'deleteRole', or specify any exclusions. The only usage hint is that attributes are optional, but this is more about parameter behavior than tool selection.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateSecret: Update specific secret by id (C, Destructive)
Update specific secret by id - Update attributes to your secret. Attributes are optional. If not set, the attributes will retain their original values. Only name and value can be updated.
| Name | Required | Description | Default |
|---|---|---|---|
| secretId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updateSecretBody | Yes |
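Since the description states that only name and value can be updated, and unset attributes keep their original values, a partial update might be sketched as follows. Field names inside the body follow directly from the description's wording.

```python
import uuid

# Hypothetical arguments for updateSecret. Per the description, only
# "name" and "value" can be changed; attributes left out of the body
# retain their original values, so this call renames without touching
# the secret's value.
arguments = {
    "secretId": "1001",
    "x-request-id": str(uuid.uuid4()),
    "updateSecretBody": {
        "name": "db-password-prod",  # omit "value" to leave it unchanged
    },
}
```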
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a destructive, non-read-only operation, which the description aligns with by implying mutation ('Update attributes'). The description adds that only 'name and value' can be updated, which provides useful constraints beyond annotations. However, it lacks details on permissions, side effects, or error handling that would enhance transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise and front-loaded with the main action. Both sentences add value: the first states the purpose, and the second clarifies update constraints and optionality. There's no redundant or wasted wording.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 4 parameters (0% schema coverage) and no output schema, the description is insufficient. It lacks details on required permissions, what happens on success/failure, response format, or error conditions. The annotations help but don't fully compensate for the missing contextual information.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate. It mentions that 'name and value can be updated' and that attributes are optional, which partially explains the 'updateSecretBody' parameter. However, it doesn't address other parameters like 'secretId', 'x-request-id', or 'x-trace-id', leaving significant gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the action ('Update specific secret by id') and specifies what can be updated ('Only name and value can be updated'), which distinguishes it from generic update operations. However, it doesn't explicitly differentiate from sibling tools like 'createSecret' or 'deleteSecret' beyond the 'by id' qualifier.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives (e.g., 'createSecret' for new secrets or 'deleteSecret' for removal). It mentions that attributes are optional and retain original values if not set, but this is more about parameter behavior than usage context.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateSnapshot: Update specific snapshot by id (Grade A, Destructive)
Update specific snapshot by id - Update attributes of a snapshot. You may only specify the attributes you want to change. If an attribute is not set, it will retain its original value.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| snapshotId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updateSnapshotBody | Yes |
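The table above lists only parameter names. As a sketch of how those parameters fit into a standard MCP `tools/call` request, assuming the JSON-RPC shape defined by the MCP specification; every concrete value below (IDs, snapshot name) is an invented placeholder, not taken from the Contabo API documentation:

```python
import json
import uuid

# Hypothetical tools/call request an MCP client could send to invoke
# updateSnapshot. All IDs and the snapshot name are placeholders.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "updateSnapshot",
        "arguments": {
            "instanceId": 12345,                # hypothetical instance ID
            "snapshotId": "snap0example",       # hypothetical snapshot ID
            "x-request-id": str(uuid.uuid4()),  # required tracing header
            "updateSnapshotBody": {
                # Partial update: attributes omitted here keep their values.
                "name": "pre-upgrade-backup",
            },
        },
    },
}
print(json.dumps(request, indent=2))
```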
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description aligns with by describing an update operation. The description adds valuable behavioral context: the partial update behavior ('only specify the attributes you want to change') and idempotent-like retention of original values for unspecified attributes. This goes beyond what annotations provide.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
Two concise sentences with zero waste. The first states the purpose, the second explains the partial update behavior. Every word contributes to understanding the tool's functionality.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with no output schema, the description covers the core update behavior well but lacks details on error conditions, response format, or side effects. Given the annotations cover safety profile, it's adequate but could be more complete about what 'destructive' entails specifically for snapshots.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage, the description doesn't explain any of the 5 parameters. It mentions updating 'attributes' generally but doesn't specify which attributes (name, description) or the required IDs. The schema fully defines parameter structure (names, types, required flags), so the baseline score of 3 applies, but the description adds minimal semantic value.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Update') and resource ('attributes of a snapshot'), specifying it's for a specific snapshot by ID. It distinguishes from createSnapshot by focusing on updates rather than creation, but doesn't explicitly differentiate from other update tools like updateImage or updateUser.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage when you need to modify snapshot attributes, mentioning 'You may only specify the attributes you want to change.' However, it doesn't provide explicit guidance on when to use this vs. alternatives like deleteSnapshot or rollbackSnapshot, nor does it mention prerequisites or error conditions.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateTag: Update specific tag by id (Grade B, Destructive)
Update specific tag by id - Update attributes to your tag. Attributes are optional. If not set, the attributes will retain their original values.
| Name | Required | Description | Default |
|---|---|---|---|
| tagId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updateTagBody | Yes |
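The description's key behavioral promise is the partial-update rule: attributes present in the body overwrite the stored tag, and omitted attributes retain their original values. A minimal sketch of that semantics; the attribute names (`name`, `color`) are assumptions drawn from the assessment text, not a confirmed schema:

```python
# Illustrative partial-update merge: only attributes supplied in the request
# body replace stored values; everything else is left untouched.
def apply_partial_update(current: dict, body: dict) -> dict:
    updated = dict(current)  # copy so the stored record is not mutated
    for key, value in body.items():
        if value is not None:
            updated[key] = value
    return updated

tag = {"name": "prod", "color": "#FF0000"}          # hypothetical stored tag
result = apply_partial_update(tag, {"color": "#00FF00"})  # only color changes
```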
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description aligns with by describing an update operation. The description adds that attributes are optional and retain original values if not set, providing useful behavioral context beyond annotations. However, it lacks details on permissions, error conditions, or rate limits, which would enhance transparency for a destructive operation.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise with two sentences that are front-loaded: the first states the action, and the second clarifies attribute behavior. There's no wasted text, but it could be slightly more structured by explicitly listing key parameters or usage scenarios.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive update with nested objects, no output schema), the description is minimally adequate. It covers the update action and optional attributes but lacks details on response format, error handling, or specific use cases. With annotations providing safety hints, it meets a baseline but doesn't fully compensate for the missing output schema and low parameter coverage.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description carries the burden of explaining parameters. It mentions 'attributes are optional' and that they 'retain their original values if not set', which adds semantic value for the updateTagBody object. However, it doesn't detail specific attributes (name, color, description) or the required tagId and x-request-id parameters, leaving gaps in parameter understanding.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Update') and resource ('specific tag by id'), making the purpose understandable. It distinguishes from sibling tools like 'createTag' and 'deleteTag' by focusing on modification rather than creation or deletion. However, it doesn't explicitly differentiate from other update tools like 'updateUser' or 'updateRole' beyond the resource type.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an existing tag ID), when not to use it (e.g., for bulk updates), or refer to sibling tools like 'createTag' for new tags or 'deleteTag' for removal. Usage is implied through the action but not explicitly contextualized.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
updateUser: Update specific user by id (Grade B, Destructive)
Update specific user by id - Update attributes of a user. You may only specify the attributes you want to change. If an attribute is not set, it will retain its original value.
| Name | Required | Description | Default |
|---|---|---|---|
| userId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| updateUserBody | Yes |
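The four parameters mix three HTTP roles: a path segment, tracing headers, and a JSON body. A sketch of how an OpenAPI-backed MCP server such as HAPI might route them when translating the call into a Contabo REST request; the URL template, the role mapping, and the body field are assumptions for illustration, not confirmed behavior of this server:

```python
# Hypothetical argument routing for updateUser.
args = {
    "userId": "usr-123",                     # path parameter (placeholder)
    "x-request-id": "req-1",                 # header, required
    "x-trace-id": "trace-1",                 # header, optional
    "updateUserBody": {"firstName": "Ada"},  # JSON body; field name assumed
}

path = f"/v1/users/{args['userId']}"         # assumed URL template
headers = {k: v for k, v in args.items() if k.startswith("x-")}
body = args["updateUserBody"]
```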
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate destructiveHint=true and readOnlyHint=false, which the description aligns with by implying mutation ('Update attributes'). It adds useful context about partial updates ('specify only attributes you want to change'), but doesn't detail side effects like authentication needs, rate limits, or what happens on failure.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is brief and front-loaded with the core action. Both sentences are relevant, with the second explaining the partial update mechanism, making it efficient without wasted words.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 4 parameters, 0% schema coverage, and no output schema, the description is insufficient. It lacks details on required parameters, error handling, return values, and specific use cases, leaving significant gaps for an AI agent to operate effectively.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate, but it only generically mentions 'attributes' without specifying which ones or their meanings. It clarifies partial update behavior, which helps, but doesn't add enough semantic detail about the 4 parameters beyond what the schema's structure implies.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the verb ('Update') and resource ('specific user by id'), making the purpose evident. However, it doesn't explicitly differentiate from sibling tools like 'updateRole' or 'updateTag', which follow similar patterns but target different resources.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. For example, it doesn't mention prerequisites like needing the user's ID or compare it to 'createUser' for new users or 'deleteUser' for removal, leaving usage context unclear.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upgradeInstance: Upgrading instance capabilities (Grade B, Destructive)
Upgrading instance capabilities - In order to enhance your instance with additional features you can purchase add-ons. Currently only firewalling and private network addon is allowed.
| Name | Required | Description | Default |
|---|---|---|---|
| instanceId | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes | ||
| upgradeInstanceBody | Yes |
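Since the schema provides no parameter descriptions, an agent must guess the body shape. A heavily hedged sketch of plausible `tools/call` arguments; the body property name `privateNetworking` is an assumption inferred from the description's mention of the two allowed add-ons, so the actual field names should be checked against the Contabo API reference before use:

```python
import uuid

# Hypothetical arguments for upgradeInstance.
arguments = {
    "instanceId": 201234,               # hypothetical instance ID
    "x-request-id": str(uuid.uuid4()),  # required per-request trace ID
    "upgradeInstanceBody": {
        "privateNetworking": {},        # assumed key for the Private Network add-on
    },
}
```

Note that the description implies billable add-on purchases, so a cautious agent would confirm pricing before issuing this call.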
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate this is a destructive (destructiveHint: true) and non-read-only (readOnlyHint: false) operation. The description adds value by mentioning that upgrades involve purchasing add-ons, which implies potential costs or billing implications. However, it doesn't disclose important behavioral details like whether the upgrade is reversible, if it causes downtime, or what authentication/rate limits apply. No contradiction with annotations exists.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is relatively concise (two sentences) but has structural issues. The first sentence is redundant with the title ('Upgrading instance capabilities'), and the second sentence contains awkward phrasing ('Currently only...is allowed'). While it avoids excessive verbosity, it doesn't efficiently convey critical information.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 4 parameters (3 required), 0% schema coverage, no output schema, and nested objects, the description is inadequate. It doesn't explain what happens after upgrade, error conditions, response format, or provide sufficient parameter guidance. The annotations help with safety profile, but the description leaves too many gaps for effective tool use.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
With 0% schema description coverage for the 4 parameters, the description carries full burden but provides almost no parameter information. It mentions 'firewalling and private network addon' which loosely relates to the 'upgradeInstanceBody' properties, but doesn't explain the required 'instanceId', 'x-request-id', or how to structure the upgrade request. The description fails to compensate for the schema's lack of parameter documentation.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description clearly states the tool's purpose: 'enhance your instance with additional features' through 'purchase add-ons', specifically mentioning 'firewalling and private network addon'. It distinguishes itself from siblings like 'patchInstance' or 'upgradeObjectStorage' by focusing on capability upgrades rather than configuration changes or other resource types. However, it doesn't explicitly name the verb 'upgrade' which is in the title.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description implies usage context ('to enhance your instance with additional features') and mentions specific allowed add-ons, but doesn't provide explicit guidance on when to use this versus alternatives like 'assignInstancePrivateNetwork' or 'createPrivateNetwork'. No prerequisites, exclusions, or comparison to sibling tools are stated.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
upgradeObjectStorage: Upgrade object storage size resp. update autoscaling settings (Grade C, Destructive)
Upgrade object storage size resp. update autoscaling settings. - Upgrade object storage size. You can also adjust the autoscaling settings for your object storage. Autoscaling allows you to automatically purchase storage capacity on a monthly basis up to the specified limit.
| Name | Required | Description | Default |
|---|---|---|---|
| x-trace-id | No | ||
| x-request-id | Yes | ||
| objectStorageId | Yes | ||
| upgradeObjectStorageBody | Yes |
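The description says autoscaling "automatically purchase[s] storage capacity on a monthly basis up to the specified limit." A toy sketch of what that ceiling rule could mean: capacity grows toward demand but never past the configured limit. The function name and the TB unit are assumptions for illustration only:

```python
# Illustrative autoscaling rule: grow toward demand, capped at the limit.
def next_month_capacity(current_tb: float, needed_tb: float, limit_tb: float) -> float:
    """Return next month's capacity: at least current usage, at most the limit."""
    return min(max(current_tb, needed_tb), limit_tb)
```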
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate readOnlyHint=false and destructiveHint=true, which the description aligns with by implying a write operation ('upgrade', 'update'). The description adds value by explaining autoscaling behavior and billing implications ('you will also be billed for the added storage space'), which annotations don't cover, though it lacks details on permissions or rate limits.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is moderately concise with two sentences, but it's repetitive ('Upgrade object storage size' appears twice) and could be more front-loaded. It avoids excessive verbosity but doesn't maximize efficiency, with some redundancy that doesn't add value.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
For a destructive mutation tool with 4 parameters, 0% schema coverage, and no output schema, the description is incomplete. It covers basic purpose and some behavioral context (billing, autoscaling) but misses parameter details, error handling, and output expectations, making it adequate but with clear gaps.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for undocumented parameters. It mentions 'object storage size' and 'autoscaling settings' but doesn't explain the four parameters (e.g., x-request-id, objectStorageId) or their semantics, leaving significant gaps beyond what the schema provides.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool upgrades object storage size and updates autoscaling settings, providing a clear verb+resource combination. However, it doesn't differentiate from sibling tools like 'updateObjectStorage' or 'CancelObjectStorage'—the purpose is clear but lacks sibling distinction.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No explicit guidance on when to use this tool versus alternatives like 'updateObjectStorage' or 'CancelObjectStorage' is provided. The description mentions autoscaling but doesn't specify prerequisites, exclusions, or when-not-to-use scenarios, leaving usage context implied at best.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
validateDomainAvailability: Check domain availability (Grade C, Destructive)
Check domain availability - Check if a specific domain is available or not
| Name | Required | Description | Default |
|---|---|---|---|
| domain | Yes | ||
| x-trace-id | No | ||
| x-request-id | Yes |
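The call itself is simple: a domain name plus the required request ID header. A minimal hypothetical argument set (the domain value is a placeholder); note that despite the check-like name, the assessment below flags a destructiveHint=true annotation, so an agent should not assume the call is side-effect free:

```python
import uuid

# Hypothetical arguments for validateDomainAvailability.
arguments = {
    "domain": "example.com",            # domain to check (placeholder)
    "x-request-id": str(uuid.uuid4()),  # required tracing header
}
```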
Tool Definition Quality
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
Annotations indicate 'readOnlyHint: false' and 'destructiveHint: true', but the description does not explain this behavior. It claims to 'check' availability, which sounds read-only, contradicting the destructive hint. No additional context is provided on side effects, rate limits, or authentication needs, leaving a gap in understanding the tool's impact.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is concise but under-specified, consisting of a repetitive sentence. It front-loads the core function but wastes words restating the title. While not verbose, it lacks necessary detail, making brevity a drawback rather than a strength.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (destructive hint, no output schema, 0% schema coverage), the description is incomplete. It fails to address behavioral contradictions, parameter meanings, or usage context. Without annotations explaining the destructive nature or output details, the description leaves critical gaps for effective tool invocation.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
Schema description coverage is 0%, so the description must compensate for three parameters. It only mentions 'domain' generically, ignoring 'x-request-id' and 'x-trace-id'. No details on parameter formats, purposes, or constraints are given, failing to add meaningful semantics beyond the bare schema.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description states the tool's purpose ('Check if a specific domain is available or not'), which is clear but vague. It repeats the title and name without specifying what 'available' means in context (e.g., for registration, purchase, or DNS). It does not distinguish from siblings like 'orderDomain' or 'listDomains', leaving ambiguity about its unique role.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. Siblings include 'orderDomain' (for purchasing), 'listDomains' (for listing owned domains), and 'retrieveDomain' (for details), but the description offers no context on prerequisites, timing, or comparisons. This lack of guidance may lead to misuse.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
Claim this connector by publishing a /.well-known/glama.json file on your server's domain with the following structure:
{
"$schema": "https://glama.ai/mcp/schemas/connector.json",
"maintainers": [{ "email": "your-email@example.com" }]
}The email address must match the email associated with your Glama account. Once published, Glama will automatically detect and verify the file within a few minutes.
Control your server's listing on Glama, including description and metadata
Access analytics and receive server usage reports
Get monitoring and health status updates for your server
Feature your server to boost visibility and reach more users
For users:
Full audit trail – every tool call is logged with inputs and outputs for compliance and debugging
Granular tool control – enable or disable individual tools per connector to limit what your AI agents can do
Centralized credential management – store and rotate API keys and OAuth tokens in one place
Change alerts – get notified when a connector changes its schema, adds or removes tools, or updates tool definitions, so nothing breaks silently
For server owners:
Proven adoption – public usage metrics on your listing show real-world traction and build trust with prospective users
Tool-level analytics – see which tools are being used most, helping you prioritize development and documentation
Direct user feedback – users can report issues and suggest improvements through the listing, giving you a channel you would not have otherwise
The connector status is unhealthy when Glama is unable to successfully connect to the server. This can happen for several reasons:
The server is experiencing an outage
The URL of the server is wrong
Credentials required to access the server are missing or invalid
If you are the owner of this MCP connector and would like to make modifications to the listing, including providing test credentials for accessing the server, please contact support@glama.ai.