
Server Details

The Google Compute Engine MCP server is a fully managed Model Context Protocol server that lets AI agents manage Google Compute Engine resources. Its tools cover instance management (creating, starting, stopping, resetting, listing), disk management, instance templates and instance group managers, machine and accelerator types, images, and reservation and commitment information. The server runs as a zero-deployment, enterprise-grade endpoint at https://compute.googleapis.com/mcp with built-in IAM-based security.
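
As a sketch of what a call to this endpoint looks like on the wire: MCP uses JSON-RPC 2.0, and the streamable HTTP transport carries request bodies like the one below. The endpoint URL comes from the description above and the tool name from the tool list below; the project/zone/instance values are hypothetical, and a real client would also perform session initialization and attach IAM credentials.

```python
import json

# Minimal MCP tools/call request body for this server (JSON-RPC 2.0).
# Sample argument values are made up for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_instance_basic_info",
        "arguments": {
            "project": "my-project",
            "zone": "us-central1-a",
            "name": "my-instance",
        },
    },
}
body = json.dumps(request)  # POSTed to https://compute.googleapis.com/mcp
```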

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.

Tool Definition Quality

Score is being calculated. Check back soon.

Available Tools

29 tools
create_instance (A)
Destructive

Create a new Google Compute Engine virtual machine (VM) instance. Requires project, zone, and instance name as input. If machine_type is not provided, it defaults to e2-medium. If image_project and image_family are not provided, it defaults to debian-12 image from debian-cloud project. guest_accelerator and maintenance_policy can be optionally provided. Proceed only if there is no error in response and the status of the operation is DONE without any errors. To get details of the operation, use the get_zone_operation tool.
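
The defaults documented above can be sketched as a small argument builder. The helper itself is hypothetical; the default values (e2-medium, and the debian-12 image family from the debian-cloud project) come straight from the tool description, and the field names follow the parameter schema below.

```python
# Hypothetical helper that applies create_instance's documented
# defaults: machine type e2-medium, debian-12 image from debian-cloud.
def build_create_instance_args(name, zone, project,
                               machine_type=None,
                               image_project=None,
                               image_family=None):
    return {
        "name": name,
        "zone": zone,
        "project": project,
        "machineType": machine_type or "e2-medium",
        "imageProject": image_project or "debian-cloud",
        "imageFamily": image_family or "debian-12",
    }

args = build_create_instance_args("web-1", "us-central1-a", "my-project")
```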

Parameters (JSON Schema)

- name (required): Identifier. The instance name.
- zone (required): The zone of the instance.
- project (required): Project ID for this request.
- imageFamily (optional): The image family of the instance.
- machineType (optional): The machine type of the instance.
- imageProject (optional): The image project of the instance.
- guestAccelerators (optional): The list of attached accelerators. Each entry specifies the accelerator type (short name or full/partial URL, e.g., 'nvidia-tesla-p4') and the count.
- maintenancePolicy (optional): The maintenance policy option for the instance.

Output Schema

- operationName (optional): The operation name of the instance creation.

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint=true and readOnlyHint=false. The description adds critical behavioral context: (1) default values (e2-medium, debian-12), (2) asynchronous operation pattern requiring status polling via get_zone_operation, and (3) optional vs required parameter clarification. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is information-dense with no filler. Every sentence serves a purpose: stating the action, required inputs, default behaviors, optional parameters, and operational workflow. The async polling instruction is slightly verbose but necessary for correct API usage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given this is a complex infrastructure mutation tool with async behavior, the description adequately covers the creation flow, default configurations, and the critical polling mechanism. Since an output schema exists (per context signals), the description appropriately avoids duplicating return value structures while still explaining the success condition.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description adds significant value by documenting default values for machine_type, image_project, and image_family, plus clarifying which parameters are required vs optional. This semantic context helps the agent understand parameter behavior beyond the schema's type definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description explicitly states 'Create a new Google Compute Engine virtual machine (VM) instance' — specific verb (Create) + specific resource (GCE VM instance). It clearly distinguishes from sibling tools like delete_instance, start_instance, or list_instances by focusing on instance creation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly references the sibling tool get_zone_operation for polling operation status ('To get details of the operation, use the get_zone_operation tool') and provides procedural guidance ('Proceed only if... status is DONE'). It is missing an explicit 'when not to use' (e.g., versus instance templates), but the async operation guidance is valuable.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

delete_instance (A)
Destructive

Delete a Google Compute Engine virtual machine (VM) instance. Requires project, zone, and instance name as input. Proceed only if there is no error in response and the status of the operation is DONE without any errors. To get details of the operation, use the get_zone_operation tool.
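
Both mutating tools prescribe the same follow-up: poll get_zone_operation until the operation reaches DONE, then check for errors. A minimal sketch of that loop, assuming a generic call_tool(name, arguments) client function and an 'operation' argument name (the exact get_zone_operation schema is not shown on this page):

```python
import time

def wait_for_zone_operation(call_tool, project, zone, operation_name,
                            interval=2.0, timeout=300.0):
    """Poll get_zone_operation until status is DONE, then fail on errors."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        op = call_tool("get_zone_operation", {
            "project": project,
            "zone": zone,
            "operation": operation_name,  # assumed parameter name
        })
        if op.get("status") == "DONE":
            if op.get("error"):
                raise RuntimeError(f"operation failed: {op['error']}")
            return op
        time.sleep(interval)
    raise TimeoutError(f"operation {operation_name!r} did not finish in time")
```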

Parameters (JSON Schema)

- name (required): Identifier. The instance name.
- zone (required): The zone of the instance.
- project (required): Project ID for this request.

Output Schema

- operationName (optional): The operation name of the instance deletion.

Behavior: 4/5

The description goes beyond the annotations marking this tool as destructive: it discloses that the operation is asynchronous (requiring a status check for the DONE state) and explains the error-handling pattern ('Proceed only if...'). This contextualizes the workflow beyond the simple 'destructive' flag.

Conciseness: 4/5

The description is front-loaded with the core action, followed by requirements, then operational guidance. Four sentences efficiently pack purpose, prerequisites, success conditions, and a related tool reference without redundancy. Slightly dense, but no wasted words.

Completeness: 4/5

Given the presence of an output schema and clear destructive annotations, the description appropriately focuses on the operational workflow (async status checking) rather than return values. It covers the critical safety aspects of a destructive operation, though mentioning idempotency or force-delete options would improve completeness.

Parameters: 3/5

With 100% schema coverage, the schema fully documents the three parameters (name, zone, project). The description restates these requirements ('Requires project, zone, and instance name') but adds no semantic depth (e.g., format constraints or the relationship between zone and project) beyond the structured schema.

Purpose: 4/5

The description clearly states the specific action (Delete) and resource (Google Compute Engine VM instance). While the verb and resource are precise, it does not explicitly differentiate itself from siblings like stop_instance (which halts but preserves the VM), which would help an agent understand that this is permanent destruction.

Usage Guidelines: 4/5

The description provides explicit prerequisites ('Requires project, zone, and instance name'), references a specific sibling tool for follow-up actions ('use the get_zone_operation tool'), and gives clear success criteria ('status of the operation is DONE'). However, it lacks explicit guidance on when NOT to use this tool (e.g., versus stopping an instance) or on prerequisites such as instance state.

get_commitment_basic_info (A)
Read-only, Idempotent

Get basic information about a Compute Engine Commitment, including its name, ID, status, plan, type, resources, and creation, start and end timestamps. Requires project, region, and commitment name as input.

Parameters (JSON Schema)

- name (required): Identifier. Name of the commitment to return.
- region (required): The region of the commitment.
- project (required): Project ID for this request.

Output Schema

- id (optional): The unique identifier for the commitment.
- name (optional): Name of the commitment.
- plan (optional): The plan of the commitment.
- type (optional): The type of the commitment.
- status (optional): The status of the commitment.
- endTime (optional): End timestamp of the commitment.
- resources (optional): A list of all the hardware resources of the commitment.
- startTime (optional): Start timestamp of the commitment.
- createTime (optional): Creation timestamp of the commitment.

Behavior: 4/5

Annotations declare readOnlyHint=true and idempotentHint=true, establishing safe read behavior. The description adds valuable context by enumerating the specific data fields returned (status, plan, timestamps, etc.), which helps the agent understand the data payload beyond what the annotations indicate.

Conciseness: 5/5

The description consists of two well-structured sentences: the first front-loaded with the action and resource, the second stating prerequisites. Every word earns its place, with no redundancy or tautology.

Completeness: 4/5

Given the presence of an output schema (which carries the burden of documenting the return structure) and comprehensive annotations (readOnly, idempotent), the description is adequately complete for a straightforward read operation. Minor gap: no mention of 'not found' error behavior.

Parameters: 3/5

The input schema has 100% description coverage, with clear semantic definitions for all three parameters (project, region, name). The description acknowledges these requirements but adds no further semantic depth (e.g., format constraints or validation rules) beyond what the schema already provides.

Purpose: 5/5

The description clearly states the specific verb 'Get', the resource 'Compute Engine Commitment', and the exact fields returned (name, ID, status, plan, type, resources, timestamps). It effectively distinguishes itself from the sibling list_commitments by implying a singular retrieval operation requiring specific identifiers.

Usage Guidelines: 3/5

The description states the required inputs (project, region, commitment name), which provides implicit context for invocation. However, it lacks explicit guidance on when to choose this tool over list_commitments (i.e., 'use when you know the specific commitment name' versus 'use when listing all commitments').

get_disk_basic_info (A)
Read-only, Idempotent

Get basic information about a Compute Engine disk, including its name, ID, description, creation timestamp, size, type, status, last attach timestamp, and last detach timestamp. Requires project, zone, and disk name as input.
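
One use of the attach/detach timestamps this tool returns: inferring whether a disk is currently attached. A hedged sketch; the inference rule is an assumption, not documented on this page, and it relies on RFC 3339 timestamps comparing chronologically as strings when their UTC offsets match.

```python
# Guess whether a disk is attached from get_disk_basic_info output.
# Assumption: lastAttachTimestamp newer than lastDetachTimestamp means
# the disk is currently attached.
def probably_attached(info):
    attach = info.get("lastAttachTimestamp")
    detach = info.get("lastDetachTimestamp")
    if not attach:
        return False  # never attached
    if not detach:
        return True   # attached and never detached
    return attach > detach

probably_attached({"lastAttachTimestamp": "2024-05-01T10:00:00Z",
                   "lastDetachTimestamp": "2024-04-01T09:00:00Z"})
```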

Parameters (JSON Schema)

- name (required): Identifier. The disk name.
- zone (required): The zone of the disk.
- project (required): Project ID for this request.

Output Schema

- id (optional): The unique identifier for the disk.
- name (optional): Name of the disk.
- type (optional): URL of the disk type resource.
- sizeGb (optional): Size of the disk in GB.
- status (optional): The status of the disk.
- createTime (optional): Creation timestamp of the disk.
- description (optional): Description of the disk.
- lastAttachTimestamp (optional): Last attach timestamp of the disk.
- lastDetachTimestamp (optional): Last detach timestamp of the disk.

Behavior: 4/5

Annotations declare readOnlyHint=true and idempotentHint=true. The description adds valuable behavioral context by enumerating exactly which fields are returned (creation timestamp, last attach/detach timestamps, size, type), clarifying what 'basic info' means without the agent needing to execute the tool. Consistent with the annotations.

Conciseness: 5/5

Two efficiently structured sentences. The first front-loads the action and enumerates return values; the second lists required inputs. No redundant filler; every element clarifies the tool's function or requirements.

Completeness: 4/5

Despite having an output schema (which reduces the burden on the description), the description commendably enumerates specific return fields, helping agents confirm this is the correct tool before invocation. It adequately covers the three required parameters for a straightforward read operation.

Parameters: 3/5

Schema coverage is 100%, with clear descriptions for project, zone, and name. The description states 'Requires project, zone, and disk name as input', which maps to the required parameters but adds minimal semantic meaning beyond what the schema already provides. The baseline score is appropriate for high schema coverage.

Purpose: 5/5

Excellent clarity: the specific verb 'Get', the resource 'Compute Engine disk', and an explicit enumeration of returned fields (name, ID, description, timestamps, size, type, status). It clearly distinguishes itself from the sibling get_disk_performance_config by specifying 'basic information' rather than performance metrics, and from list_disks by targeting a specific disk via required identifiers.

Usage Guidelines: 3/5

The description provides implied usage context by specifying 'basic information' and listing specific fields, which helps distinguish it from get_disk_performance_config. However, it lacks explicit guidance on when to use this versus list_disks, and mentions no explicit alternatives or prerequisites.

get_disk_performance_config (A)
Read-only, Idempotent

Get performance configuration of a Compute Engine disk, including its type, size, provisioned IOPS, provisioned throughput, physical block size, storage pool and access mode. Requires project, zone, and disk name as input.

Parameters (JSON Schema)

- name (required): Identifier. The disk name.
- zone (required): The zone of the disk.
- project (required): Project ID for this request.

Output Schema

- type (optional): URL of the disk type resource.
- sizeGb (optional): Size of the disk in GB.
- accessMode (optional): The access mode of the disk.
- storagePool (optional): The storage pool of the disk.
- provisionedIops (optional): Indicates how many IOPS to provision for the disk. This sets the number of I/O operations per second that the disk can handle.
- provisionedThroughput (optional): Indicates how much throughput to provision for the disk. This sets the throughput, in MB per second, that the disk can handle.
- physicalBlockSizeBytes (optional): Physical block size of the persistent disk, in bytes.

Behavior: 4/5

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable context by enumerating the specific configuration fields returned (type, size, provisioned IOPS, throughput, storage pool, access mode), clarifying what 'performance configuration' entails.

Conciseness: 5/5

Two sentences with zero waste. The first front-loads the action and the specific resource attributes; the second states prerequisites. Every word earns its place, with no redundancy.

Completeness: 4/5

Given the presence of an output schema and strong annotations, the description appropriately focuses on input requirements. It proactively lists specific output fields (provisioned IOPS, physical block size, etc.), providing completeness without duplicating the output schema structure.

Parameters: 3/5

Schema description coverage is 100%, with all three parameters (project, zone, name) fully documented. The description reinforces that these inputs are required but adds no further syntax, format, or semantic detail beyond what the schema provides, warranting the baseline score.

Purpose: 5/5

The description uses the specific verb 'Get' with the clear resource 'performance configuration of a Compute Engine disk'. It distinguishes itself from the sibling get_disk_basic_info by specifying performance-specific attributes (provisioned IOPS, throughput, physical block size) rather than basic metadata.

Usage Guidelines: 3/5

The description states prerequisites ('Requires project, zone, and disk name'), implying the required inputs. However, it lacks explicit guidance on when to use this versus the sibling get_disk_basic_info, relying only on the implicit distinction drawn by the listed performance attributes.

get_instance_basic_info (A)
Read-only, Idempotent

Get basic information about a Compute Engine VM instance, including its name, ID, status, machine type, creation timestamp, and attached guest accelerators. Requires project, zone, and instance name as input.
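
Because the output includes a status field, an agent can branch on it before reaching for the start/stop tools mentioned in the server overview. A small sketch; the status values are the standard Compute Engine instance statuses (RUNNING, TERMINATED, SUSPENDED, and transitional states), which this page does not itself define, and the helper is hypothetical.

```python
# Decide a follow-up action from get_instance_basic_info's output.
# The info dict mirrors the output schema below; status values are
# standard Compute Engine instance statuses (an assumption here).
def suggest_action(info):
    status = info.get("status")
    if status == "RUNNING":
        return "none"
    if status in ("TERMINATED", "SUSPENDED"):
        return "start_instance"
    return "wait"  # transitional states such as PROVISIONING or STOPPING

action = suggest_action({"name": "web-1", "status": "TERMINATED"})
```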

Parameters (JSON Schema)

- name (required): Identifier. The instance name.
- zone (required): The zone of the instance.
- project (required): Project ID for this request.

Output Schema

- id (optional): The unique identifier for the instance.
- name (optional): Name of the instance.
- status (optional): The status of the instance.
- createTime (optional): Creation timestamp of the instance.
- machineType (optional): The machine type of the instance.
- guestAccelerators (optional): Accelerators attached to the instance.

Behavior: 3/5

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering the safety profile. The description adds an enumeration of return fields (guest accelerators, timestamps, etc.), which provides useful content context. However, it omits other behavioral details such as rate limits, required IAM permissions, and error conditions (e.g., behavior when the instance is not found).

Conciseness: 5/5

Two sentences with zero waste. The first front-loads purpose and return-value details; the second states prerequisites. Every clause earns its place, and the length is appropriate for the tool's complexity.

Completeness: 4/5

Given the presence of an output schema (covering return values) and comprehensive annotations (covering safety), the description provides sufficient context: it identifies the GCP service (Compute Engine), specifies the returned data fields, and clarifies the required inputs. Adequately complete for a standard resource getter with no hidden side effects.

Parameters: 3/5

Schema description coverage is 100%, so the schema fully documents all three parameters. The description maps the 'name' parameter to 'instance name' and confirms that the inputs are required, but adds minimal semantic detail beyond what the schema's 'Required' labels and property descriptions already provide. The baseline of 3 is appropriate for high-coverage schemas.

Purpose: 5/5

The specific verb 'Get' is paired with the clear resource 'Compute Engine VM instance'. The description lists the exact fields returned (name, ID, status, machine type, creation timestamp, attached guest accelerators), providing high specificity, and distinguishes itself from disk/template siblings by explicitly naming the VM resource type.

Usage Guidelines: 4/5

The description provides clear context by specifying the target resource (Compute Engine VM) and explicitly stating the required inputs (project, zone, instance name), which implies prerequisites for use. While it doesn't explicitly name alternatives such as 'use list_instances for enumeration', the singular 'instance' and the required-identifier pattern clearly signal that this tool retrieves a specific instance rather than listing.

get_instance_group_manager_basic_info (A)
Read-only, Idempotent

Get basic information about a Compute Engine managed instance group (MIG), including its name, ID, instance template, base instance name, target size, target stopped size, target suspended size, status and creation timestamp. Requires project, zone, and MIG name as input.

Parameters (JSON Schema)

- name (required): Identifier. The instance group manager name.
- zone (required): The zone of the instance group manager.
- project (required): Project ID for this request.

Output Schema

- id (optional): The unique identifier for the instance group manager.
- name (optional): Name of the instance group manager.
- status (optional): The status of the instance group manager.
- createTime (optional): Creation timestamp of the instance group manager.
- targetSize (optional): The target size of the instance group manager.
- baseInstanceName (optional): The base instance name of the instance group manager.
- instanceTemplate (optional): The instance template of the instance group manager.
- targetStoppedSize (optional): The target stopped size of the instance group manager.
- targetSuspendedSize (optional): The target suspended size of the instance group manager.

Behavior: 3/5

Annotations cover the read-only and idempotent safety properties; the description aligns by using 'Get' and adds value by enumerating specific return fields. It does not address permissions, rate limits, or error behaviors, but this is acceptable given the rich annotation coverage.

Conciseness: 4/5

Two well-structured sentences with the primary action front-loaded. The enumerated list of return fields is long but informative and relevant to the tool's purpose, with no filler text.

Completeness: 4/5

Given complete input schema coverage, an existing output schema, and comprehensive annotations, the description adequately covers the resource type, required inputs, and expected data fields without needing to elaborate on the return structure or complex behaviors.

Parameters: 3/5

With 100% schema description coverage, the baseline is 3. The description maps the 'name' parameter to 'MIG name' for clarity, but otherwise repeats the schema's 'Required' indicators without adding supplementary semantics such as input format constraints or validation rules.

Purpose: 4/5

The description clearly identifies the specific resource (Compute Engine managed instance group, or MIG) and action (Get basic information), listing distinct return fields such as target stopped size and target suspended size that differentiate it from siblings like get_instance_basic_info (which targets VMs, not MIGs).

Usage Guidelines: 3/5

The description specifies the input prerequisites (project, zone, and MIG name), which provides basic usage constraints. However, it lacks explicit guidance on when to use this tool versus list_instance_group_managers or other siblings.

get_instance_template_basic_info (A)
Read-only, Idempotent

Get basic information about a Compute Engine instance template, including its name, ID, description, machine type, region, and creation timestamp. Requires project and instance template name as input.

Parameters (JSON Schema)

- name (required): Identifier. Name of the instance template to return.
- project (required): Project ID for this request.

Output Schema

Name | Required | Description
id | No | The unique identifier for the instance template.
name | No | Name of the instance template.
region | No | The region of the instance template if it is a regional resource.
createTime | No | Creation timestamp of the instance template.
description | No | Description of the instance template.
machineType | No | The machine type of the instance template.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations declare readOnlyHint/idempotentHint, so the description's burden is lower. It adds valuable behavioral context by enumerating the specific fields returned (name, ID, description, machine type, region, creation timestamp), which helps the agent predict the output structure beyond what the annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences: the first defines the operation and enumerates returned fields; the second states input requirements. No redundancy or filler content. Every clause earns its place by conveying distinct operational information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and 100% input schema coverage, the description appropriately focuses on high-level purpose and input requirements. It enhances completeness by listing the specific basic fields returned, though it could further clarify the boundary between 'basic_info' and 'properties' given the sibling tool existence.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with both 'project' and 'name' well-documented in the schema (including x-google-identifier). The description merely reiterates that these are required inputs without adding semantic nuance (e.g., name format expectations, project validation rules) beyond the schema definitions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Get') and resource ('Compute Engine instance template'), and lists specific fields returned (name, ID, machine type, etc.), which implicitly distinguishes it from the sibling 'get_instance_template_properties'. However, it does not explicitly name the sibling tool to clarify when to use which variant.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description clearly states what inputs are required ('Requires project and instance template name'), but provides no explicit guidance on when to use this tool versus the sibling 'get_instance_template_properties' or other alternatives. The 'basic information' phrasing implies limited scope, but exclusion criteria are not stated.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_instance_template_properties (A)
Read-only · Idempotent

Get instance properties of a Compute Engine instance template. This includes properties such as description, tags, machine type, network interfaces, disks, metadata, service accounts, scheduling options, labels, guest accelerators, reservation affinity, and shielded/confidential instance configurations. Requires project and instance template name as input.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Required. Identifier. Name of the instance template to return.
project | Yes | Required. Project ID for this request.

Output Schema

Name | Required | Description
tags | No | A list of tags to apply to the instances that are created from these properties. The tags identify valid sources or targets for network firewalls. The setTags method can modify this list of tags. Each tag within the list must comply with RFC1035 <https://www.ietf.org/rfc/rfc1035.txt>.
disks | No | An array of disks that are associated with the instances that are created from these properties.
labels | No | Labels to apply to instances that are created from these properties.
metadata | No | The metadata key/value pairs to assign to instances that are created from these properties. These pairs can consist of custom metadata or predefined keys. See Project and instance metadata </compute/docs/metadata#project_and_instance_metadata> for more information.
scheduling | No | Specifies the scheduling options for the instances that are created from these properties.
description | No | An optional text description for the instances that are created from these properties.
machineType | No | The machine type to use for instances that are created from these properties. This field only accepts a machine type name, for example `n2-standard-4`. If you use the machine type full or partial URL, for example `projects/my-l7ilb-project/zones/us-central1-a/machineTypes/n2-standard-4`, the request will result in an `INTERNAL_ERROR`.
canIpForward | No | Enables instances created based on these properties to send packets with source IP addresses other than their own and receive packets with destination IP addresses other than their own. If these instances will be used as an IP gateway or it will be set as the next-hop in a Route resource, specify true. If unsure, leave this set to false. See the Enable IP forwarding </vpc/docs/using-routes#canipforward> documentation for more information.
minCpuPlatform | No | Minimum cpu/platform to be used by instances. The instance may be scheduled on the specified or newer cpu/platform. Applicable values are the friendly names of CPU platforms, such as minCpuPlatform: "Intel Haswell" or minCpuPlatform: "Intel Sandy Bridge". For more information, read Specifying a Minimum CPU Platform </compute/docs/instances/specify-min-cpu-platform>.
serviceAccounts | No | A list of service accounts with specified scopes. Access tokens for these service accounts are available to the instances that are created from these properties. Use metadata queries to obtain the access tokens for these instances.
resourcePolicies | No | Resource policies (names, not URLs) applied to instances created from these properties. Note that for MachineImage, this is not supported yet.
guestAccelerators | No | A list of guest accelerator cards' type and count to use for instances created from these properties.
networkInterfaces | No | An array of network access configurations for this interface.
reservationAffinity | No | Specifies the reservations that instances can consume from. Note that for MachineImage, this is not supported yet.
resourceManagerTags | No | Input only. Resource manager tags to be bound to the instance. Tag keys and values have the same definition as resource manager tags <https://cloud.google.com/resource-manager/docs/tags/tags-overview>. Keys must be in the format `tagKeys/{tag_key_id}`, and values are in the format `tagValues/456`. The field is ignored (both PUT & PATCH) when empty.
shieldedInstanceConfig | No | Note that for MachineImage, this is not supported yet.
workloadIdentityConfig | No |
advancedMachineFeatures | No | Controls for advanced machine-related behavior features. Note that for MachineImage, this is not supported yet.
keyRevocationActionType | No | KeyRevocationActionType of the instance. Supported options are "STOP" and "NONE". The default value is "NONE" if it is not specified.
privateIpv6GoogleAccess | No | The private IPv6 google access type for VMs. If not specified, use INHERIT_FROM_SUBNETWORK as default. Note that for MachineImage, this is not supported yet.
networkPerformanceConfig | No | Note that for MachineImage, this is not supported yet.
confidentialInstanceConfig | No | Specifies the Confidential Instance options. Note that for MachineImage, this is not supported yet.
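The machineType caveat in the schema above (bare names only; URL forms fail with `INTERNAL_ERROR`) is worth guarding against client-side. The helper below is a hypothetical sketch of such a guard, not part of any API.

```python
def check_machine_type(machine_type: str) -> str:
    """Reject URL-form machine types, which the API refuses with INTERNAL_ERROR.

    Only a bare name such as 'n2-standard-4' is accepted; any path-like value
    (e.g. 'projects/p/zones/z/machineTypes/n2-standard-4') is rejected here
    before the request is ever sent.
    """
    if "/" in machine_type:
        raise ValueError(
            f"machineType must be a bare name like 'n2-standard-4', got: {machine_type!r}"
        )
    return machine_type
```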
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent/destructive hints, so the safety profile is covered. The description adds valuable behavioral context by listing exactly which configuration fields are retrieved (guest accelerators, reservation affinity, shielded configs, etc.), helping the agent understand the richness of the response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences. The long property list is justified given the need to differentiate from the 'basic_info' sibling, though it could potentially be summarized as 'comprehensive configuration details' with the specifics left to the output schema.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Fully complete for a read-only tool with 2 parameters, 100% schema coverage, existing output schema, and comprehensive annotations. The description adequately covers what distinguishes this from siblings without needing to document return values (covered by output schema).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage (both 'project' and 'name' fully documented), establishing baseline 3. The description adds minor value by clarifying 'instance template name' to distinguish from regular instance names, but largely repeats schema information.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses specific verb 'Get' with clear resource 'instance properties of a Compute Engine instance template'. It effectively distinguishes from sibling 'get_instance_template_basic_info' by enumerating the comprehensive list of specific properties returned (machine type, network interfaces, disks, metadata, etc.), signaling this is the detailed variant.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The extensive property list provides clear implicit guidance on when to use this tool (when detailed configuration is needed) versus the 'basic_info' sibling. However, it lacks explicit 'when-not' guidance or direct reference to alternatives by name.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_reservation_basic_info (A)
Read-only · Idempotent

Get Compute Engine reservation basic info including name, ID, creation timestamp, zone, status, specific reservation required, commitment, and linked commitments. Requires project, zone, and reservation name as input.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Required. Identifier. Name of the reservation to retrieve.
zone | Yes | Required. The zone of the reservation.
project | Yes | Required. Project ID for this request.

Output Schema

Name | Required | Description
id | No | The unique identifier for the reservation.
name | No | Name of the reservation.
zone | No | The zone of the reservation.
status | No | The status of the reservation.
commitment | No | The commitment this reservation is tied to.
createTime | No | Creation timestamp of the reservation.
linkedCommitments | No | The commitments linked to this reservation.
specificReservationRequired | No | Indicates whether the reservation can be consumed by VMs with affinity for "any" reservation. If the field is set, then only VMs that target the reservation by name can consume from this reservation.
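The specificReservationRequired semantics documented above reduce to a simple consumption rule, sketched here as a hypothetical helper (not part of any API):

```python
def can_consume(specific_reservation_required: bool, vm_targets_by_name: bool) -> bool:
    """Apply the documented rule: when specificReservationRequired is set,
    only VMs that target the reservation by name may consume from it;
    otherwise VMs with 'any reservation' affinity may also consume."""
    if specific_reservation_required:
        return vm_targets_by_name
    return True
```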
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly=true, destructive=false). Description adds value by defining 'basic info' scope through field enumeration, but omits behavioral details like error handling for non-existent reservations or API rate limit characteristics.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with zero redundancy. Front-loaded with verb and resource, followed by field list, then input requirements. Every word serves the definition.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately scoped for a retrieval tool with existing output schema. Field enumeration compensates for not describing return structure. Would benefit from explicit contrast with 'get_reservation_details' sibling to clarify selection criteria.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage with clear type definitions. Description restates the three required parameters but adds no semantic depth (e.g., project ID format, zone naming conventions, name validation rules). Baseline 3 appropriate when schema is self-documenting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity: states 'Get Compute Engine reservation basic info' with concrete fields listed (name, ID, creation timestamp, zone, status, etc.). The term 'basic info' distinguishes it from sibling 'get_reservation_details' (which presumably returns more comprehensive configuration data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implicit guidance by enumerating the limited fields returned, suggesting use when lightweight reservation metadata suffices. However, lacks explicit 'when to use vs get_reservation_details' guidance or prerequisites like required IAM permissions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_reservation_details (A)
Read-only · Idempotent

Get Compute Engine reservation details. Returns reservation details including name, ID, status, creation timestamp, specific reservation properties like machine type, guest accelerators and local SSDs, aggregate reservation properties like VM family and reserved resources, commitment and linked commitments, sharing settings, and resource status. Requires project, zone, and reservation name as input.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Required. Identifier. Name of the reservation to retrieve.
zone | Yes | Required. The zone of the reservation.
project | Yes | Required. Project ID for this request.

Output Schema

Name | Required | Description
id | No | Output only. [Output Only] The unique identifier for the resource. This identifier is defined by the server.
kind | No | Output only. [Output Only] Type of the resource. Always compute#reservations for reservations.
name | No | The name of the resource, provided by the client when initially creating the resource. The resource name must be 1-63 characters long, and comply with RFC1035 <https://www.ietf.org/rfc/rfc1035.txt>. Specifically, the name must be 1-63 characters long and match the regular expression `[a-z]([-a-z0-9]*[a-z0-9])?` which means the first character must be a lowercase letter, and all following characters must be a dash, lowercase letter, or digit, except the last character, which cannot be a dash.
zone | No | Zone in which the reservation resides. A zone must be provided if the reservation is created within a commitment.
params | No | Input only. Additional params passed with the request, but not persisted as part of resource payload.
status | No | Output only. [Output Only] The status of the reservation. - CREATING: Reservation resources are being allocated. - READY: Reservation resources have been allocated, and the reservation is ready for use. - DELETING: Reservation deletion is in progress. - UPDATING: Reservation update is in progress.
selfLink | No | Output only. [Output Only] Server-defined fully-qualified URL for this resource.
commitment | No | Output only. [Output Only] Full or partial URL to a parent commitment. This field displays for reservations that are tied to a commitment.
description | No | An optional description of this resource. Provide this property when you create the resource.
deleteAtTime | No | Absolute time in future when the reservation will be auto-deleted by Compute Engine. Timestamp is represented in RFC3339 <https://www.ietf.org/rfc/rfc3339.txt> text format.
satisfiesPzs | No | Output only. [Output Only] Reserved for future use.
shareSettings | No | Specify share-settings to create a shared reservation. This property is optional. For more information about the syntax and options for this field and its subfields, see the guide for creating a shared reservation. <https://cloud.google.com/compute/docs/instances/reservations-shared#creating_a_shared_reservation>
deploymentType | No | Specifies the deployment strategy for this reservation.
protectionTier | No | Protection tier for the workload which specifies the workload expectations in the event of infrastructure failures at data center (e.g. power and/or cooling failures).
resourceStatus | No | Output only. [Output Only] Status information for Reservation resource.
schedulingType | No | The type of maintenance for the reservation.
resourceMetadata | No | Output only. [Output Only] Contains standard resource metadata for an Allocation resource. It is populated for each instance of the Allocation resource, and includes the api_version the instance was retrieved through, and its canonical resource_type name.
resourcePolicies | No | Resource policies to be added to this reservation. The key is defined by user, and the value is resource policy url. This is to define placement policy with reservation.
creationTimestamp | No | Output only. [Output Only] Creation timestamp in RFC3339 <https://www.ietf.org/rfc/rfc3339.txt> text format.
linkedCommitments | No | Output only. [Output Only] Full or partial URL to parent commitments. This field displays for reservations that are tied to multiple commitments.
deleteAfterDuration | No | Duration time relative to reservation creation when Compute Engine will automatically delete this resource.
specificReservation | No | Reservation for instances with specific machine shapes.
aggregateReservation | No | Reservation for aggregated resources, providing shape flexibility.
earlyAccessMaintenance | No | Indicates the early access maintenance for the reservation. If this field is absent or set to NO_EARLY_ACCESS, the reservation is not enrolled in early access maintenance and the standard notice applies.
confidentialComputeType | No |
reservationSharingPolicy | No | Specify the reservation sharing policy. If unspecified, the reservation will not be shared with Google Cloud managed services.
advancedDeploymentControl | No | Advanced control for cluster management, applicable only to DENSE deployment type reservations.
enableEmergentMaintenance | No | Indicates whether Compute Engine allows unplanned maintenance for your VMs; for example, to fix hardware errors.
specificReservationRequired | No | Indicates whether the reservation can be consumed by VMs with affinity for "any" reservation. If the field is set, then only VMs that target the reservation by name can consume from this reservation.
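The name rules quoted in the schema above (1-63 characters, RFC1035-style pattern `[a-z]([-a-z0-9]*[a-z0-9])?`) can be checked client-side before a call is made. A minimal sketch:

```python
import re

# Pattern and length limits follow the reservation name rules quoted above.
_NAME_RE = re.compile(r"[a-z]([-a-z0-9]*[a-z0-9])?")

def is_valid_resource_name(name: str) -> bool:
    """Check a name against the RFC1035-style rules the schema documents:
    1-63 chars, lowercase-letter start, no trailing dash."""
    return 1 <= len(name) <= 63 and _NAME_RE.fullmatch(name) is not None
```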
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds substantial value by detailing the rich data returned—specifically listing hardware properties, aggregate reservations, commitments, and sharing settings—which helps the agent understand the information density of the response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two-sentence structure is logical: purpose first, then return value specification. The list of returned fields is long but serves the critical purpose of distinguishing from the 'basic_info' sibling. Could be slightly more concise ('Returns detailed hardware configuration, commitments, and sharing status'), but the specificity is justified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete for a read-only lookup tool with simple required parameters and existing output schema. Description covers input requirements and summarizes output fields sufficiently to guide selection without repeating the full output schema.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions already present. Description mentions the three required inputs but adds no additional semantic detail (e.g., name format constraints, whether project is numeric ID or string). Baseline score appropriate when schema carries full semantic load.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear specific verb ('Get') plus resource ('Compute Engine reservation details'). Effectively distinguishes from sibling 'get_reservation_basic_info' by enumerating specific detailed fields returned (machine type, guest accelerators, commitments, sharing settings, etc.), signaling this is the comprehensive metadata retrieval tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implicitly differentiates from 'basic_info' variant through the exhaustive field enumeration, which signals when detailed configuration is needed. However, lacks explicit guidance like 'use this instead of get_reservation_basic_info when you need hardware details.' Clearly identifies required inputs (project, zone, name).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_zone_operation (A)
Read-only · Idempotent

Get details of a zone operation, including its id, name, status, creation timestamp, error, warning, HTTP error message and HTTP error status code. Requires project, zone, and operation name as input.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Required. Identifier. The operation name.
zone | Yes | Required. The zone of the operation.
project | Yes | Required. Project ID for this request.

Output Schema

Name | Required | Description
id | No | The unique identifier for the operation.
name | No | Name of the operation.
error | No | Errors encountered during the operation execution.
status | No | The status of the operation.
warnings | No | Warnings encountered during the operation execution.
createTime | No | Creation timestamp of the operation.
httpErrorMessage | No | If the operation fails, this field contains the HTTP error message that corresponds to the HTTP error code generated for the audit log.
httpErrorStatusCode | No | If the operation fails, this field contains the HTTP error status code that corresponds to the HTTP error message generated for the audit log.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare readOnly/idempotent, but description adds valuable context about return payload structure (error messages, warning fields, status codes) that indicates potential failure states and monitoring utility beyond what annotations provide.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences: the first establishes the action and return fields, the second states requirements. No redundant content; appropriately front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriate for complexity given rich annotations and 100% schema coverage. Output schema exists so detailed return description isn't strictly necessary, though listing key fields is helpful. Could mention typical polling workflow for async operations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with full parameter descriptions. Description restates that project, zone, and operation name are required, which confirms but doesn't significantly enrich the schema semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Get' with specific resource 'zone operation' and clearly distinguishes from siblings (instance, disk, commitment operations). Lists specific return fields (id, status, error, etc.) clarifying scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States what the tool does but lacks explicit guidance on when to use it versus siblings. Missing crucial context that this is for polling asynchronous operation status returned by mutating calls like create_instance/delete_instance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
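That polling workflow for asynchronous operations can be made concrete. Assuming a caller-supplied zero-argument `fetch` callable that wraps a get_zone_operation call and returns the operation resource as a dict (an assumption for illustration, not a documented interface), a minimal wait loop looks like:

```python
import time

def wait_for_zone_operation(fetch, poll_interval: float = 2.0, max_polls: int = 60) -> dict:
    """Poll a get_zone_operation-style fetcher until the operation is DONE.

    `fetch` is any zero-argument callable returning a dict with at least a
    'status' field (and an 'error' field when the operation has failed).
    """
    for _ in range(max_polls):
        op = fetch()
        if op.get("status") == "DONE":
            if op.get("error"):
                raise RuntimeError(f"operation failed: {op['error']}")
            return op
        time.sleep(poll_interval)
    raise TimeoutError("operation did not reach DONE in time")
```

A mutating tool such as create_instance would supply the operation name; the caller then polls with this loop until DONE before proceeding.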

list_accelerator_types (A)
Read-only · Idempotent

Lists the available Google Compute Engine accelerator types. Requires project and zone as input. Returns accelerator types, including id, creation timestamp, name, description, deprecated, zone, and maximum cards per instance.

Parameters (JSON Schema)
Name | Required | Description
zone | Yes | Required. The zone of the accelerator types.
project | Yes | Required. Project ID for this request.

Output Schema

Name | Required | Description
acceleratorTypes | No | The list of accelerator types.
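Because each returned entry carries a deprecated marker, a client will typically filter the list before surfacing options. The response below is a hedged sample shaped after the output schema, not real API output:

```python
def usable_accelerators(response: dict) -> list:
    """Return names of accelerator types that carry no deprecation marker."""
    return [
        a["name"]
        for a in response.get("acceleratorTypes", [])
        if not a.get("deprecated")
    ]

# Hypothetical sample response, for illustration only.
sample = {
    "acceleratorTypes": [
        {"name": "nvidia-tesla-t4", "maximumCardsPerInstance": 4},
        {"name": "nvidia-tesla-k80", "deprecated": {"state": "DEPRECATED"}},
    ]
}
```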
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

While annotations declare readOnlyHint and idempotentHint, the description adds valuable return value semantics by listing specific fields returned (id, creation timestamp, deprecated status, maximum cards per instance), providing concrete expectations beyond the structured safety annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two well-structured sentences front-load the purpose first, then inputs, then outputs. The input requirement restatement is slightly redundant with schema constraints but maintains overall efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the simple two-parameter input, presence of output schema, and comprehensive annotations, the description adequately covers the essentials. Minor gap: no mention of pagination behavior or filtering capabilities typical for list operations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. The description mentions 'Requires project and zone as input' but adds no semantic depth, format specifications, or examples beyond what the schema already documents.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Lists') and a well-defined resource ('Google Compute Engine accelerator types'), clearly distinguishing it from sibling tools like list_instances or list_machine_types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through specific resource naming but lacks explicit guidance on when to choose this over similar list operations (e.g., list_machine_types for CPU vs accelerator types for GPUs) or prerequisites beyond parameter requirements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_commitment_reservations (A)
Read-only · Idempotent

Lists reservations for a Compute Engine Commitment. Returns reservation details including name, ID, status, creation timestamp, specific reservation properties like machine type, guest accelerators and local SSDs, aggregate reservation properties like VM family and reserved resources, commitment and linked commitments, sharing settings, and resource status. Requires project, region, and commitment name as input.

Parameters (JSON Schema)
- name (required): Identifier. Name of the commitment to look up reservations for.
- region (required): The region of the commitment.
- project (required): Project ID for this request.

Output Schema
- reservations (optional): The list of reservations.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Enumerates specific return data fields (machine type, guest accelerators, local SSDs, VM family, sharing settings) beyond what annotations provide, giving the agent clear expectations about response richness without duplicating the output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three-sentence structure logically separates purpose, return values, and inputs. The comprehensive field list, while lengthy, is information-dense and appropriate for a list operation with complex return data.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive coverage of tool contract: clear purpose, detailed output summary (despite presence of output schema), and explicit input requirements. Appropriate for a read-only lookup tool with 3 parameters.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Input schema has 100% coverage with clear descriptions. The description aggregates the required inputs but adds minimal semantic detail beyond the schema's 'Required' markers and existing parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specifies exact verb (Lists) and resource scope (reservations for a Compute Engine Commitment), clearly distinguishing from siblings like list_reservations (general listing) and list_commitments (different resource).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context through the specific input requirements (commitment name) and scope limitation, but lacks explicit when/when-not guidance or named alternatives to sibling tools like get_reservation_details.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_commitments (A)
Read-only, Idempotent

Lists Compute Engine Commitments in a region. Details for each commitment include name, ID, status, plan, type, resources, and creation, start and end timestamps. Requires project and region as input.

Parameters (JSON Schema)
- region (required): The region of the commitments.
- project (required): Project ID for this request.
- pageSize (optional): The maximum number of commitments to return.
- pageToken (optional): A page token received from a previous call to list commitments.

Output Schema
- commitments (optional): The list of commitments.
- nextPageToken (optional): A token that can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
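The pageSize/pageToken inputs and nextPageToken output follow the standard cursor pattern: echo each response's nextPageToken back as pageToken until the server omits it. A minimal sketch of draining all pages, assuming a hypothetical `call_tool(name, arguments)` helper that forwards the request to the MCP endpoint (the helper is not part of this server; the tool and field names come from the schema above):

```python
def list_all_commitments(call_tool, project, region, page_size=50):
    """Drain every page of list_commitments by chasing nextPageToken."""
    commitments, token = [], None
    while True:
        args = {"project": project, "region": region, "pageSize": page_size}
        if token:
            args["pageToken"] = token
        resp = call_tool("list_commitments", args)
        commitments.extend(resp.get("commitments", []))
        token = resp.get("nextPageToken")
        if not token:  # an omitted token means there are no further pages
            return commitments
```

The same loop applies to every paginated list tool in this server; only the tool name and the result key change.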
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent/destructive status. Description adds value by detailing the specific output fields returned (name, ID, status, plan, type, resources, timestamps), which clarifies data richness beyond the schema annotations. No contradictions with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three logically ordered sentences: purpose first, output details second, input requirements third. No filler or tautology. Structure is front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 100% schema coverage, clear annotations, and an output schema present, the description provides adequate context by listing key return fields. Missing explicit mention of pagination behavior, but sufficiently complete for a standard list operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 100% description coverage, establishing baseline 3. Description mentions the required parameters (project, region) but does not add semantic elaboration beyond what the schema provides (e.g., no explanation of pagination behavior for pageToken/pageSize).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb ('Lists') and resource ('Compute Engine Commitments') with regional scope. Distinguishes from sibling 'get_commitment_basic_info' by enumerating detailed output fields (name, ID, status, plan, etc.) implying a comprehensive listing rather than basic single-resource retrieval.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States input requirements ('Requires project and region as input'), implying prerequisites. However, lacks explicit guidance on when to use this versus 'get_commitment_basic_info' (list all vs. get specific) or versus 'list_commitment_reservations'.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_disks (B)
Read-only, Idempotent

Lists Compute Engine disks. Details for each disk include name, ID, description, creation timestamp, size, type, status, last attach timestamp, and last detach timestamp. Requires project and zone as input.

Parameters (JSON Schema)
- zone (required): The zone of the disks to list.
- project (required): Project ID for this request.
- pageSize (optional): The maximum number of results per page that should be returned.
- pageToken (optional): The page token received from the previous call.

Output Schema
- disks (optional): The list of disk basic info.
- nextPageToken (optional): The page token to retrieve the next page of results.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnly/idempotent/destructive hints, so the description carries moderate burden. It adds valuable context by listing specific fields returned (last attach/detach timestamps, etc.), but fails to mention pagination behavior despite accepting pageSize/pageToken parameters, or any rate limiting concerns.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two efficient sentences with no filler. Front-loads the action in sentence one, follows with field details. The prerequisite mention ('Requires project...') is slightly misplaced at the end rather than with parameter context, but overall economical.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description appropriately previews return fields without duplicating the full structure. Adequate for a list operation, though mentioning the pagination model would have improved completeness given the presence of page tokens.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description redundantly states 'Requires project and zone as input', which duplicates the schema's 'Required' markers, and adds no semantic depth regarding pagination token handling or valid pageSize ranges.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action 'Lists Compute Engine disks' and enumerates detailed return fields (name, ID, status, timestamps, etc.). However, it does not explicitly distinguish when to use this versus sibling tools like get_disk_basic_info (single disk lookup) or list_instance_attached_disks (filtered by instance).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Mentions 'Requires project and zone as input' which states prerequisites but provides no guidance on when to choose this tool over alternatives (e.g., bulk listing vs. specific disk retrieval) or when pagination is necessary. No 'when-not-to-use' conditions are described.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_images (A)
Read-only, Idempotent

Lists Compute Engine Images. Details for each image include name, ID, status, family, and creation timestamp. Requires project as input.

Parameters (JSON Schema)
- project (required): Project ID for this request.
- pageSize (optional): The maximum number of images to return.
- pageToken (optional): A page token received from a previous call to list images.

Output Schema
- images (optional): The list of images.
- nextPageToken (optional): A token that can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
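As a sketch of how an agent might consume this listing, the snippet below picks the newest READY image in a family. `call_tool` is a hypothetical MCP invocation helper, and the lexicographic sort on creationTimestamp assumes the RFC 3339 values share a uniform offset; a production version would parse the timestamps instead:

```python
def newest_ready_image(call_tool, project, family):
    """Pick the most recent READY image in a family from list_images output."""
    images = call_tool("list_images", {"project": project}).get("images", [])
    ready = [img for img in images
             if img.get("family") == family and img.get("status") == "READY"]
    # RFC 3339 timestamps with a uniform offset sort lexicographically.
    ready.sort(key=lambda img: img.get("creationTimestamp", ""))
    return ready[-1]["name"] if ready else None
```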
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate readOnlyHint=true and non-destructive nature, so the safety profile is covered. The description adds value by documenting what fields appear in the output (name, ID, status, etc.), but omits behavioral details about pagination mechanics, rate limits, or filtering capabilities that would help an agent handle large result sets.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with no redundancy: action statement, output details, and input requirement. Each sentence earns its place; appropriately front-loaded with the operation type before details.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and clear annotations, the description adequately covers the essentials by mentioning key return fields. However, for a paginated list operation, it could briefly acknowledge the pagination pattern to signal how to handle large datasets.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with project, pageSize, and pageToken fully documented. The description reinforces that project is required, but does not add semantic context beyond the schema (e.g., explaining that pageToken comes from a previous response). Baseline 3 for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clear verb 'Lists' and resource 'Compute Engine Images' with specific output fields enumerated (name, ID, status, family, creation timestamp). While it identifies the resource type, it does not explicitly differentiate from other 'list_' sibling tools or clarify that these are VM disk images versus other image types.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States the prerequisite 'Requires project as input' but provides no guidance on when to use pagination parameters (pageSize/pageToken) versus retrieving all results, and does not indicate when this tool should be preferred over other list operations or instance creation workflows.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_instance_attached_disks (A)
Read-only, Idempotent

Lists the disks attached to a Compute Engine virtual machine (VM) instance. For each attached disk, the response includes details such as kind, type, mode, saved state, source, device name, index, boot, initialize parameters, auto delete, licenses,, interface, guest OS features, disk encryption key, disk size, shielded instance initial state, force attach, and architecture. Requires project, zone, and instance name as input.

Parameters (JSON Schema)
- name (required): Identifier. The instance name.
- zone (required): The zone of the instance.
- project (required): Project ID for this request.

Output Schema
- attachedDisks (optional): The list of attached disks.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With annotations covering read-only/idempotent safety, the description adds value by enumerating 18+ specific fields returned (kind, type, encryption, architecture, etc.), giving agents clear insight into data richness without contradicting the safety profile.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Front-loaded with clear purpose, though the exhaustive field enumeration (18+ items with a typo 'licenses,,') could be condensed to 'detailed disk configuration including encryption, boot flags, and attachment metadata' without losing utility.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Well-covered with explicit input requirements, detailed output field preview, and strong annotations. Minor gap: no mention of pagination behavior or result limits, though presence of output schema reduces this burden.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear descriptions for project, zone, and name. The description restates these requirements but adds no supplemental semantics (format examples, validation rules, or relationship between zone and instance), warranting the baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Excellent specificity with 'Lists the disks attached to a Compute Engine virtual machine (VM) instance', clearly distinguishing from sibling 'list_disks' (which lists all disks in a zone) and other instance tools by scoping the operation to attached storage resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides implied usage through specificity (use when targeting a specific instance's disks), but lacks explicit guidance on when to prefer this over 'list_disks' or 'get_disk_basic_info', and doesn't mention prerequisites beyond the required parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_instance_group_managers (A)
Read-only, Idempotent

Lists Compute Engine managed instance groups (MIGs). Details for each MIG include name, ID, instance template, base instance name, target size, target stopped size, target suspended size, status and creation timestamp. Requires project and zone as input.

Parameters (JSON Schema)
- zone (required): The zone of the instance group managers.
- project (required): Project ID for this request.
- pageSize (optional): The maximum number of instance group managers to return.
- pageToken (optional): A page token received from a previous call to list instance group managers.

Output Schema
- nextPageToken (optional): A token that can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
- instanceGroupManagers (optional): The list of instance group managers.

Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, confirming safe read behavior. The description adds value by enumerating specific return fields (name, ID, template, target sizes, status) beyond the annotations, and explicitly stating the required scope (project and zone). It does not mention pagination behavior despite pageToken/pageSize parameters.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: purpose statement, return value enumeration, and input requirements. The field list is lengthy but necessary to distinguish from the 'basic info' sibling tool. No redundant or wasted language.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description appropriately summarizes return content via the field enumeration. It covers the required inputs and implies list behavior. With 4 parameters (2 required) and rich annotations, the description provides sufficient context, though mentioning pagination would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so parameters are fully documented in the schema. The description reinforces that project and zone are required inputs but does not add semantic details (formats, constraints) beyond the schema. Baseline 3 is appropriate given the schema carries the semantic burden.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Lists Compute Engine managed instance groups (MIGs)' with specific resource identification. It distinguishes from sibling get_instance_group_manager_basic_info by enumerating multiple return fields (name, ID, target size, etc.) implying a list operation, and distinguishes from list_managed_instances by targeting 'groups' rather than instances.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage by listing returned fields and stating input requirements ('Requires project and zone'), but lacks explicit guidance on when to use this versus get_instance_group_manager_basic_info (single lookup) or alternative list operations. No explicit 'when not to use' or alternative recommendations are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_instances (A)
Read-only, Idempotent

Lists Compute Engine virtual machine (VM) instances. Details for each instance include name, ID, status, machine type, creation timestamp, and attached guest accelerators. Use other tools to get more details about each instance. Requires project and zone as input.

Parameters (JSON Schema)
- zone (required): The zone of the instances.
- project (required): Project ID for this request.
- pageSize (optional): The maximum number of instances to return.
- pageToken (optional): A page token received from a previous call to list instances.

Output Schema
- instances (optional): The list of instances.
- nextPageToken (optional): A token that can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
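The description's 'use other tools to get more details' hint suggests a list-then-lookup pattern. A sketch, assuming a hypothetical `call_tool` helper and taking get_instance_basic_info (the sibling lookup tool named in the reviews here) as the follow-up; the exact detail-tool name is an assumption:

```python
def instance_details(call_tool, project, zone):
    """List instances, then fan out to a per-instance detail tool."""
    listing = call_tool("list_instances", {"project": project, "zone": zone})
    details = []
    for inst in listing.get("instances", []):
        # get_instance_basic_info is assumed to be the follow-up detail tool.
        details.append(call_tool(
            "get_instance_basic_info",
            {"project": project, "zone": zone, "name": inst["name"]},
        ))
    return details
```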
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations cover safety profile (readOnly, idempotent, non-destructive). Description adds value by listing specific returned fields (name, ID, status, etc.) and emphasizing project/zone requirements, but omits behavioral details like pagination behavior, rate limits, or latency expectations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences with logical progression: action definition, return value specification, and usage constraints. Front-loaded with the primary verb. Minor structural awkwardness in the final sentence combining two distinct thoughts (alternative tools and required parameters).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Appropriately complete given rich annotations and existing output schema. Mentions key fields returned and required parameters. Could benefit from explicitly noting paginated results, though pageToken parameter in schema implies this behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, establishing a baseline of 3. Description reinforces that project and zone are required but does not add semantic depth beyond the schema (e.g., no guidance on pageToken usage patterns or default pageSize behavior).

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

States specific action 'Lists' with clear resource 'Compute Engine virtual machine (VM) instances' and enumerates returned fields. Distinguishes from 'get' siblings by noting it provides basic info only, though could explicitly name the contrast with get_instance_basic_info.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit guidance on when not to use: 'Use other tools to get more details about each instance.' Also notes prerequisite inputs 'Requires project and zone as input,' clarifying the minimal required context for invocation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_instance_templates (B)
Read-only, Idempotent

Lists Compute Engine instance templates. Details for each instance template include name, ID, description, machine type, region, and creation timestamp. Requires project as input.

Parameters (JSON Schema)
- project (required): Project ID for this request.
- pageSize (optional): The maximum number of instance templates to return.
- pageToken (optional): A page token received from a previous call to list instance templates.

Output Schema
- nextPageToken (optional): A token that can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
- instanceTemplates (optional): The list of instance templates.

Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false. The description adds valuable context about what fields are returned (name, ID, machine type, etc.), but fails to disclose pagination behavior despite the presence of pageToken/pageSize parameters in the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences with no wasted words. The first states the action, the second lists return fields and the required input. However, it could front-load the pagination aspect or filtering limitations to be more complete.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists, the description appropriately summarizes key return fields rather than detailing the full structure. However, for a list operation with pagination parameters, the description is incomplete as it omits how to handle paginated results.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema adequately documents all parameters including pagination controls. The description merely confirms that project is required, adding no significant semantic value beyond the schema, warranting the baseline score of 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description uses a specific verb ('Lists') and resource ('Compute Engine instance templates'), clearly stating what the tool does. However, it does not explicitly distinguish this list operation from sibling tools like 'get_instance_template_basic_info' or 'get_instance_template_properties', which retrieve single resources versus listing multiple.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description identifies the prerequisite ('Requires project as input'), but provides no guidance on when to use this tool versus the 'get' siblings for retrieving specific templates, nor does it mention pagination requirements for large result sets.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_machine_types (A)
Read-only, Idempotent

Lists the available Google Compute Engine machine types. Requires project and zone as input. Returns machine types, including id, creationTimestamp, name, description, guest cpus, memory, image space, maximum persistent disks, maximum persisten disks size, deprecated, zone, is shared cpu, accelerators, and architecture.

Parameters (JSON Schema)
- zone (required): The zone of the machine types.
- project (required): Project ID for this request.

Output Schema
- machineTypes (optional): The list of machine types.
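A common use of this listing is right-sizing: choosing the smallest machine type that meets resource floors. A sketch, assuming a hypothetical `call_tool` helper and the guestCpus/memoryMb field names implied by the description's 'guest cpus' and 'memory' fields (the exact key casing is an assumption):

```python
def smallest_machine_type(call_tool, project, zone, min_cpus, min_memory_mb):
    """Pick the smallest non-deprecated machine type meeting both floors."""
    resp = call_tool("list_machine_types", {"project": project, "zone": zone})
    fits = [mt for mt in resp.get("machineTypes", [])
            if mt.get("guestCpus", 0) >= min_cpus
            and mt.get("memoryMb", 0) >= min_memory_mb
            and not mt.get("deprecated")]
    # Sort by CPU count first, then memory, and take the smallest match.
    fits.sort(key=lambda mt: (mt["guestCpus"], mt["memoryMb"]))
    return fits[0]["name"] if fits else None
```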
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and idempotentHint=true, establishing safe read behavior. The description adds valuable behavioral context by enumerating the specific fields returned (id, creationTimestamp, guestCpus, etc.), which helps the agent understand the data structure and richness of the response.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with purpose first, then requirements, then return values. However, it contains a typo ('persisten') and the exhaustive field list in the final sentence is slightly verbose given that an output schema exists, though the front-loading is effective.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has only two well-documented parameters, complete annotations (readOnly, idempotent, non-destructive), and an output schema, the description provides sufficient context. It covers inputs, outputs, and basic requirements adequately for a straightforward listing operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents both required parameters. The description merely restates that these are required ('Requires project and zone as input') without adding semantic details like format conventions or validation rules, meeting the baseline expectation.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'Lists the available Google Compute Engine machine types' with a specific verb and resource. However, it does not distinguish from siblings like list_accelerator_types or list_instances, which could help the agent select the correct inventory tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description notes prerequisite inputs ('Requires project and zone as input') but provides no guidance on when to use this tool versus alternatives like get_instance_basic_info or list_accelerator_types, nor does it mention typical use cases such as capacity planning or VM creation workflows.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_managed_instances (A)
Read-only, Idempotent

Lists managed instances for a given managed instance group (MIG). For each instance, details include id, instance URL, instance status, and current action. Requires project, zone, and MIG name as input.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Required. Identifier. The instance group manager name.
zone | Yes | Required. The zone of the instance group manager.
project | Yes | Required. Project ID for this request.
pageSize | No | Optional. The maximum number of managed instances to return.
pageToken | No | Optional. A page token received from a previous call to list managed instances.

Output Schema

Parameters (JSON Schema)
Name | Required | Description
nextPageToken | No | A token that can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
managedInstances | No | The list of managed instances.
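The pageToken/nextPageToken pair implies the usual drain loop: pass each nextPageToken back until it is omitted. A minimal sketch of that loop; call_tool is a hypothetical stand-in for an MCP client invocation, stubbed here with canned pages rather than real server responses:

```python
# Canned pages standing in for real server responses, keyed by page token.
_PAGES = {
    None: {"managedInstances": [{"id": "1"}, {"id": "2"}], "nextPageToken": "t1"},
    "t1": {"managedInstances": [{"id": "3"}]},  # no nextPageToken: last page
}

def call_tool(name, args):
    # Stand-in for a real MCP tool call.
    return _PAGES[args.get("pageToken")]

def list_all_managed_instances(project, zone, mig_name, page_size=500):
    """Drain every page of list_managed_instances into one list."""
    instances, token = [], None
    while True:
        args = {"project": project, "zone": zone, "name": mig_name,
                "pageSize": page_size}
        if token:
            args["pageToken"] = token
        page = call_tool("list_managed_instances", args)
        instances.extend(page.get("managedInstances", []))
        token = page.get("nextPageToken")
        if not token:  # omitted token means there are no further pages
            return instances

print(len(list_all_managed_instances("my-project", "us-central1-a", "my-mig")))  # 3
```

The same loop applies unchanged to the other paginated tools here (list_reservations, list_snapshots), since they share the pageSize/pageToken/nextPageToken shape.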
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare read-only/idempotent safety. Description adds valuable behavioral context by previewing return values (id, instance URL, instance status, current action) that would otherwise require inspecting the output schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two sentences with zero waste. First sentence establishes purpose and scope; second covers return value details and required inputs. Front-loaded and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 100% schema coverage, good annotations, and existing output schema, the description provides adequate completeness. Previews key return fields and identifies required inputs. Could mention pagination behavior but not required given schema completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, but description adds semantic value by mapping the 'name' parameter to 'MIG name', clarifying it refers to the instance group manager name, not the instance name. Confirms the three required parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Lists' with resource 'managed instances' and scope 'for a given managed instance group (MIG)'. Clearly distinguishes from sibling 'list_instances' (standalone VMs) and 'list_instance_group_managers' (the groups themselves) by specifying it returns instances within a MIG.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides clear context that this is for MIGs specifically, but does not explicitly mention sibling alternatives like 'list_instances' or when to use one versus the other. States required inputs (project, zone, MIG name) which implies prerequisites.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_reservations (B)
Read-only, Idempotent

Lists Compute Engine reservations. Details for each reservation include name, ID, creation timestamp, zone, status, specific reservation required, commitment, and linked commitments. Requires project and zone as input.

Parameters (JSON Schema)
Name | Required | Description
zone | Yes | Required. The zone of the reservations.
project | Yes | Required. Project ID for this request.
pageSize | No | Optional. The maximum number of reservations to return.
pageToken | No | Optional. A page token received from a previous call to list reservations.

Output Schema

Parameters (JSON Schema)
Name | Required | Description
reservations | No | The list of reservations.
nextPageToken | No | A token that can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true and destructiveHint=false, establishing safe read-only behavior. The description adds value by listing returned fields (name, ID, timestamp, etc.), which provides context beyond annotations. However, it omits pagination behavior, rate limiting, or zone validation rules that would be helpful for invocation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences: action, return details, and requirements. Front-loaded with the primary verb. Slightly redundant listing return fields since output schema exists, but not excessive. No wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Adequately complete for a list operation with rich annotations and 100% schema coverage. Includes required inputs and return field preview. Could improve by mentioning pagination (pageToken/pageSize relationship) or maximum result limits, but sufficient for basic invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage (all 4 parameters documented), the schema carries the semantic load. The description mentions 'Requires project and zone as input' which confirms but doesn't enhance the schema definitions. Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states the action (Lists) and resource (Compute Engine reservations) with specific scope. Implies plural listing operation distinguishing it from sibling 'get_reservation_basic_info' (singular fetch), though it doesn't explicitly name the sibling alternative.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

States input requirements ('Requires project and zone as input') but provides no guidance on when to use this tool versus siblings like 'get_reservation_basic_info' or 'list_commitment_reservations'. No mention of pagination strategy or filtering capabilities.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_snapshots (A)
Read-only, Idempotent

Lists snapshots in a project, providing basic information per snapshot, including name, id, status, creation time, disk size, storage bytes, source disk, and source disk id. Requires project as input.

Parameters (JSON Schema)
Name | Required | Description
project | Yes | Required. Project ID for this request.
pageSize | No | Optional. The maximum number of snapshots to return.
pageToken | No | Optional. A page token received from a previous call to list snapshots.

Output Schema

Parameters (JSON Schema)
Name | Required | Description
snapshots | No | The list of snapshots.
nextPageToken | No | A token that can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already establish this as read-only, idempotent, and non-destructive. The description adds value by detailing what data fields are returned (name, id, status, etc.), which helps set output expectations. However, it omits mention of pagination behavior despite the presence of pageToken/pageSize parameters in the schema.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description consists of two efficient sentences. The first sentence front-loads the core action and return value summary; the second states the critical prerequisite. No redundancy or filler text is present.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema and comprehensive annotations (readOnly, idempotent), the description adequately covers the tool's purpose and requirements. It appropriately leverages the output schema by not over-explaining return values while still providing a useful field summary. Minor gap: pagination behavior is implicit in the schema but not described.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema coverage, the baseline is appropriately 3. The description mentions 'Requires project as input,' which aligns with the schema but does not add additional semantic context such as format constraints, validation rules, or relationships between pagination parameters.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific verb (Lists), resource (snapshots), and scope (in a project). It further distinguishes the tool by enumerating the specific fields returned (name, id, status, creation time, disk size, etc.), clearly positioning it as a basic listing tool among siblings.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description mentions the 'project' requirement ('Requires project as input') and implies scope by specifying 'basic information,' suggesting this is for overviews rather than detailed inspection. However, it lacks explicit when-to-use guidance or named alternatives compared to other sibling tools like potential get_snapshot operations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

reset_instance (A)
Destructive

Resets a Google Compute Engine virtual machine (VM) instance. Requires project, zone, and instance name as input. Proceed only if there is no error in response and the status of the operation is DONE without any errors. To get details of the operation, use the get_zone_operation tool.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Required. Identifier. The instance name.
zone | Yes | Required. The zone of the instance.
project | Yes | Required. Project ID for this request.

Output Schema

Parameters (JSON Schema)
Name | Required | Description
operationName | No | The operation name of the instance reset.
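The tool's "proceed only if DONE" rule implies a poll loop over get_zone_operation. A minimal sketch of that pattern; call_tool and the exact argument names passed to get_zone_operation are hypothetical stand-ins (the server's real operation-lookup parameters are not documented here), and the stub returns a canned status sequence:

```python
# Canned operation statuses standing in for a real in-flight operation.
_statuses = ["PENDING", "RUNNING", "DONE"]

def call_tool(name, args):
    # Stand-in for real MCP tool calls.
    if name == "reset_instance":
        return {"operationName": "operation-123"}
    if name == "get_zone_operation":
        status = _statuses.pop(0) if len(_statuses) > 1 else _statuses[0]
        return {"status": status, "error": None}
    raise ValueError(f"unknown tool: {name}")

def reset_and_wait(project, zone, instance):
    """Reset a VM, then poll until the operation is DONE without errors."""
    op = call_tool("reset_instance",
                   {"project": project, "zone": zone, "name": instance})
    while True:
        details = call_tool("get_zone_operation",
                            {"project": project, "zone": zone,
                             "name": op["operationName"]})
        if details["status"] == "DONE":
            if details.get("error"):
                raise RuntimeError(f"reset failed: {details['error']}")
            return details
        # A real agent loop would sleep with backoff before re-polling.

print(reset_and_wait("my-project", "us-central1-a", "vm-1")["status"])  # DONE
```

The same call-then-poll shape applies to the other mutating tools below (set_instance_machine_type, start_instance, stop_instance), which share the operationName output and the get_zone_operation follow-up.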
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already mark this as destructive and non-idempotent. The description adds valuable operational context that this is an asynchronous operation requiring polling via get_zone_operation and checking for DONE status, which is critical for agent success.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three well-structured sentences cover purpose, inputs, and operational guidance efficiently. The parameter sentence is slightly redundant given complete schema coverage, but the operational guidance sentence earns its place by explaining the async polling pattern.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the presence of an output schema, the description appropriately avoids repeating return value details. It successfully covers the critical asynchronous behavior and destructive nature (via annotations) necessary for an agent to safely manage a VM reset operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the baseline is 3. The description lists the three required parameters but adds no additional semantic meaning, formatting guidance, or selection criteria beyond what the schema already provides.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a specific verb ('Resets') and clear resource ('Google Compute Engine virtual machine instance'), distinguishing it from siblings like start_instance, stop_instance, and delete_instance.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit post-invocation guidance (wait for DONE status) and names get_zone_operation as a follow-up tool for operation details. However, it lacks pre-invocation guidance on when to select reset vs. alternatives like stop_instance or delete_instance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

set_instance_machine_type (A)
Destructive

Sets the machine type for a stopped Google Compute Engine instance to the specified machine type. Requires project, zone, instance name and machine type as input. Proceed only if there is no error in response and the status of the operation is DONE without any errors. To get details of the operation, use the get_zone_operation tool.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Required. Identifier. The instance name.
zone | Yes | Required. The zone of the instance.
project | Yes | Required. Project ID for this request.
machineType | Yes | Required. The machine type of the instance.

Output Schema

Parameters (JSON Schema)
Name | Required | Description
operationName | No | The operation name of the instance machine type change.
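Neither the description nor the schema says what format machineType expects. The underlying Compute Engine setMachineType API takes a zonal partial URL rather than a bare name; assuming this tool mirrors that, the explicit form is the safer input (the helper and argument dict below are illustrative, not part of the server):

```python
def machine_type_url(zone, machine_type):
    # Zonal partial URL form used by the Compute Engine setMachineType API.
    return f"zones/{zone}/machineTypes/{machine_type}"

# Hypothetical tool arguments; the instance must already be stopped.
args = {
    "project": "my-project",
    "zone": "us-central1-a",
    "name": "vm-1",
    "machineType": machine_type_url("us-central1-a", "e2-standard-4"),
}
print(args["machineType"])  # zones/us-central1-a/machineTypes/e2-standard-4
```
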
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Adds crucial behavioral context beyond annotations: specifies the instance must be stopped (state requirement), describes the async operation pattern requiring status polling ('Proceed only if...status...is `DONE`'), and references the companion tool for operation tracking. Annotations only indicate destructiveness; description explains the operational workflow.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three logically ordered sentences: purpose/parameters, success validation, and operation tracking. Minor redundancy in first sentence ('machine type...machine type'). Otherwise efficient with no wasted words.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the output schema exists and the tool performs a specific GCE configuration change, the description adequately covers the critical operational aspects: stopped state requirement, async operation verification (DONE status), and operation polling workflow. Appropriate for this complexity level.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the schema already fully documents all four parameters. Description merely lists them ('Requires project, zone, instance name and machine type') without adding semantic depth, syntax examples, or validation rules beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description clearly states the specific action (Sets), resource (machine type for Google Compute Engine instance), and critical constraint (stopped instance), distinguishing it from siblings like start_instance or reset_instance that operate on running instances.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit alternative tool reference ('use the `get_zone_operation` tool') for checking operation status and states the stopped instance prerequisite. Lacks explicit 'when not to use' (e.g., don't use on running instances) though this is implied.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

start_instance (A)
Destructive

Starts a Google Compute Engine virtual machine (VM) instance. Requires project, zone, and instance name as input. Proceed only if there is no error in response and the status of the operation is DONE without any errors. To get details of the operation, use the get_zone_operation tool.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Required. Identifier. The instance name.
zone | Yes | Required. The zone of the instance.
project | Yes | Required. Project ID for this request.

Output Schema

Parameters (JSON Schema)
Name | Required | Description
operationName | No | The operation name of the instance start.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations declare destructiveHint=true and readOnlyHint=false; description adds critical behavioral context that this is an async operation requiring polling for 'DONE' status via get_zone_operation, which is not inferable from annotations alone. No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Three sentences efficiently structured: purpose first, requirements second, workflow/error-handling third. Zero redundancy; every sentence earns its place by conveying distinct information not fully captured in structured fields.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Completeness is strong given output schema exists: description correctly focuses on operation polling pattern (specific to GCP async operations) rather than return values. Minor gap: does not mention idempotency behavior (false per annotations) or preconditions like instance must be stopped.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% and description repeats parameter names ('project, zone, and instance name') without adding semantic details (formats, constraints, example values). Baseline 3 appropriate for high schema coverage with minimal additive description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Description uses specific verb 'Starts' with specific resource 'Google Compute Engine virtual machine (VM) instance' and clearly distinguishes from siblings like stop_instance, reset_instance, and delete_instance through the explicit action stated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly names sibling tool `get_zone_operation` as the alternative for checking operation details, and provides explicit workflow guidance ('Proceed only if... status is `DONE`'). Could improve by contrasting explicitly with stop/reset when multiple state-changing options apply.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

stop_instance (A)
Destructive

Stops a Google Compute Engine virtual machine (VM) instance. Requires project, zone, and instance name as input. Proceed only if there is no error in response and the status of the operation is DONE without any errors. To get details of the operation, use the get_zone_operation tool.

Parameters (JSON Schema)
Name | Required | Description
name | Yes | Required. Identifier. The instance name.
zone | Yes | Required. The zone of the instance.
project | Yes | Required. Project ID for this request.

Output Schema

Parameters (JSON Schema)
Name | Required | Description
operationName | No | The operation name of the instance stop.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate destructiveHint=true, but description adds critical behavioral context about the async operation pattern: the response contains an operation object that must reach 'DONE' status to be considered complete. This operational semantic is essential for correct agent behavior and not covered by annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences with purpose front-loaded. Structure is logical: action definition, input requirements, completion criteria, follow-up tool. Minor redundancy listing parameters already well-documented in schema, but overall efficient with no filler content.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Complete for a destructive async operation tool. Covers the VM stopping action, required inputs, asynchronous completion semantics, and references the polling tool. Output schema exists (context signal), so description appropriately omits return value details. Addresses the key complexity aspects (async nature, destructiveness).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% with clear parameter descriptions. Description states 'Requires project, zone, and instance name as input' which merely restates the schema requirements without adding semantic value (formats, constraints, or examples). Baseline 3 appropriate for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Specific verb 'Stops' with specific resource 'Google Compute Engine virtual machine (VM) instance' clearly identifies the action and target. Clearly distinguishes from siblings like start_instance, reset_instance, and delete_instance by specifying the stop operation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Provides explicit workflow guidance on handling the asynchronous operation response: checking that status is 'DONE' without errors before proceeding. References specific sibling tool 'get_zone_operation' for polling operation details, which helps the agent understand the async pattern. Lacks explicit differentiation from similar mutating siblings (e.g., when to stop vs reset).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
