Server Details

Interact with your Google Cloud Datastream resources using natural language commands.

Status: Healthy
Transport: Streamable HTTP

Glama MCP Gateway

Connect through Glama MCP Gateway for full control over tool access and complete visibility into every call.

MCP client → Glama MCP Gateway → MCP server

Full call logging

Every tool call is logged with complete inputs and outputs, so you can debug issues and audit what your agents are doing.

Tool access control

Enable or disable individual tools per connector, so you decide what your agents can and cannot do.

Managed credentials

Glama handles OAuth flows, token storage, and automatic rotation, so credentials never expire on your clients.

Usage analytics

See which tools your agents call, how often, and when, so you can understand usage patterns and catch anomalies.

100% free. Your data is private.
Tool Descriptions (Grade: A)

Average 4.1/5 across 10 of 10 tools scored. Lowest: 3.4/5.

Server Coherence (Grade: A)
Disambiguation: 5/5

Every tool has a clearly distinct purpose targeting specific resources and actions. The stream-related tools (get_stream, delete_stream, run_stream) are clearly differentiated from object tools (get_stream_object, list_stream_objects, lookup_stream_object) and other resource types (connection_profiles, static_ips). No ambiguity exists between tools.

Naming Consistency: 5/5

All tools follow a consistent verb_noun pattern with snake_case throughout. The naming convention is perfectly predictable: action_resource (e.g., delete_stream, get_operation, list_connection_profiles). No deviations or mixed conventions exist.

Tool Count: 5/5

With 10 tools, this server is well-scoped for managing data streams and related resources. The count provides comprehensive coverage without being overwhelming, with each tool serving a clear purpose in the domain of stream management and monitoring.

Completeness: 4/5

The toolset provides excellent coverage for stream lifecycle management (create implied by run_stream, get, list, delete, run) and related resources. Minor gaps exist such as no explicit create_stream tool (though run_stream might handle this) and no update operations for streams or objects, but agents can work effectively with the provided surface.

Available Tools

10 tools
delete_stream (Grade: A)
Destructive

Deletes a stream, specified by the provided resource 'name' parameter.

  • The resource 'name' parameter is in the form: 'projects/{project name}/locations/{location}/streams/{stream name}', for example: 'projects/my-project/locations/us-central1/streams/my-streams'.

  • This tool returns a long-running operation. Use the 'get_operation' tool with the returned operation name to poll its status until it completes. Operation may take several minutes; do not check more often than every ten seconds.
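
Taken together, the delete call and the polling contract can be sketched in Python. This is a minimal sketch, not a definitive implementation: `call_tool(tool_name, arguments)` is a hypothetical stand-in for however your MCP client invokes a server tool, and is assumed to return the tool's JSON result as a dict. The ten-second minimum poll interval comes from the description above.

```python
import time

def delete_stream_and_wait(call_tool, name, poll_interval=10.0, timeout=600.0):
    """Delete a stream, then poll the long-running operation until it settles."""
    op = call_tool("delete_stream", {"name": name})
    deadline = time.monotonic() + timeout
    while not op.get("done"):
        if time.monotonic() > deadline:
            raise TimeoutError(f"operation {op.get('name')!r} did not complete in time")
        # Per the description: do not check more often than every ten seconds.
        time.sleep(poll_interval)
        op = call_tool("get_operation", {"name": op["name"]})
    if "error" in op:
        raise RuntimeError(f"delete_stream failed: {op['error']}")
    return op.get("response")
```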

Parameters (JSON Schema)

  • name (required): Required. The name of the stream resource to delete.

  • requestId (optional): Optional. A request ID to identify requests. Specify a unique request ID so that if you must retry your request, the server will know to ignore the request if it has already been completed. The server will guarantee that for at least 60 minutes after the first request. For example, consider a situation where you make an initial request and the request times out. If you make the request again with the same request ID, the server can check if the original operation with the same request ID was received, and if so, will ignore the second request. This prevents clients from accidentally creating duplicate commitments. The request ID must be a valid UUID, with the exception that the zero UUID is not supported (00000000-0000-0000-0000-000000000000).
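
The requestId retry semantics above suggest generating one UUID per logical delete and reusing it verbatim on every retry, so the server can deduplicate within its 60-minute window. A small sketch; the `delete_args` helper name is invented for illustration:

```python
import uuid

ZERO_UUID = "00000000-0000-0000-0000-000000000000"

def delete_args(name):
    """Build delete_stream arguments with a retry-safe requestId."""
    request_id = str(uuid.uuid4())
    assert request_id != ZERO_UUID  # the zero UUID is explicitly unsupported
    return {"name": name, "requestId": request_id}

args = delete_args("projects/my-project/locations/us-central1/streams/my-stream")
retry_args = args  # a retry must carry the SAME requestId, not a fresh one
```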

Output Schema

  • done: If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
  • name: The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
  • error: The error result of the operation in case of failure or cancellation.
  • metadata: Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
  • response: The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already indicate destructiveHint=true, readOnlyHint=false, openWorldHint=false, and idempotentHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it discloses that the tool returns a long-running operation, requires polling with get_operation, and has timing constraints (minutes to complete, poll every 10 seconds). However, it doesn't mention potential side effects like data loss or dependencies, which could be relevant for a destructive operation.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: the first sentence states the purpose, followed by bullet points for parameter format and behavioral details. Every sentence adds value—no redundancy or fluff. It's front-loaded with the core action and resource, making it easy to scan and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (destructive operation with long-running response), the description is complete: it explains the purpose, parameter format, return behavior (long-running operation), and post-invocation steps (polling with get_operation). With annotations covering safety and an output schema likely detailing the operation response, no critical gaps remain. It effectively complements the structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing detailed documentation for both parameters (name and requestId). The description adds semantic context by explaining the format of the 'name' parameter with a concrete example ('projects/my-project/locations/us-central1/streams/my-streams'), which clarifies usage beyond the schema's generic description. It doesn't add details for requestId, but the schema already covers it thoroughly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Deletes a stream') and resource ('a stream, specified by the provided resource 'name' parameter'), distinguishing it from sibling tools like get_stream, list_streams, or run_stream. It provides a concrete example of the resource format, making the purpose unambiguous and specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool (to delete a stream) and provides clear guidance on what to do after invocation: 'Use the 'get_operation' tool with the returned operation name to poll its status until it completes.' It also specifies timing constraints ('Operation may take several minutes; do not check more often than every ten seconds'), offering comprehensive usage instructions.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_operation (Grade: A)
Read-only · Idempotent

Gets the status of a long-running operation.

Usage

Some tools (for example, run_stream) return a long-running operation. You can use this tool to get the status of the operation. It can be called repeatedly until the operation is complete.

Parameters

  • name: The name of the operation to get.

    • name should be the name returned by the tool that initiated the operation.

    • name should be in the format of: projects/{project}/locations/{location}/operations/{operation}.

Returns

  • An Operation object that contains the status of the operation.

  • If the operation is not complete, the response will be empty. Do not check more than every ten seconds.

  • If the operation is complete, the response will contain either:

    • A response field that contains the result of the operation and indicates that it was successful.

    • An error field that indicates any errors that occurred during the operation.
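
The three-way outcome described above (still pending, failed with an error field, or succeeded with a response field) can be collapsed into one small helper. A sketch, independent of any particular client library:

```python
def interpret_operation(op):
    """Map an Operation dict onto the three documented outcomes.

    Returns one of ("pending", None), ("error", <error dict>),
    or ("success", <response dict or None>).
    """
    if not op.get("done"):
        return ("pending", None)
    if "error" in op:
        return ("error", op["error"])
    return ("success", op.get("response"))
```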

Parameters (JSON Schema)

  • name (optional): The name of the operation resource.

Output Schema

  • done: If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
  • name: The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
  • error: The error result of the operation in case of failure or cancellation.
  • metadata: Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
  • response: The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already provide readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds valuable behavioral context beyond annotations: it explains the polling pattern ('called repeatedly until the operation is complete'), rate limiting guidance ('Do not check more than every ten seconds'), and response structure details for incomplete vs. complete operations. This significantly enhances understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Usage, Parameters, Returns) and front-loaded information. Most sentences earn their place by providing essential guidance, though the Returns section could be slightly more concise. Overall, it's appropriately sized and organized for the tool's complexity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (polling operation status), rich annotations, 100% schema coverage, and the presence of an output schema, the description is complete. It covers purpose, usage context, parameter semantics, behavioral details (polling frequency, response structure), and return value explanations, providing all necessary context for an AI agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds meaningful semantic context beyond the schema: it specifies that the name 'should be the name returned by the tool that initiated the operation' and provides the exact format 'projects/{project}/locations/{location}/operations/{operation}'. This clarifies usage and constraints, elevating the score above baseline.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verb+resource ('Gets the status of a long-running operation') and distinguishes it from siblings by explaining it's for checking operations returned by other tools like `run_stream`. This is not a tautology and provides meaningful differentiation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit guidance on when to use this tool ('Some tools (for example, `run_stream`) return a long-running operation. You can use this tool to get the status of the operation. It can be called repeatedly until the operation is complete.') and includes a sibling reference, making it clear when this tool is appropriate versus alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stream (Grade: B)
Read-only · Idempotent

Get details of the stream specified by the provided resource 'name' parameter.

  • The resource 'name' parameter is in the form: 'projects/{project name}/locations/{location}/streams/{stream name}', for example: 'projects/my-project/locations/us-central1/streams/my-streams'.
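
The resource-name format lends itself to a pair of helpers for assembling and validating names before calling the tool. A sketch; the helper names are invented for illustration:

```python
import re

_STREAM_NAME = re.compile(
    r"^projects/(?P<project>[^/]+)"
    r"/locations/(?P<location>[^/]+)"
    r"/streams/(?P<stream>[^/]+)$"
)

def stream_name(project, location, stream):
    """Assemble a stream resource name in the documented format."""
    return f"projects/{project}/locations/{location}/streams/{stream}"

def parse_stream_name(name):
    """Split a stream resource name back into its components, or raise."""
    match = _STREAM_NAME.match(name)
    if match is None:
        raise ValueError(f"not a stream resource name: {name!r}")
    return match.groupdict()
```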

Parameters (JSON Schema)

  • name (required): Required. The name of the stream resource to get.

Output Schema

  • name: Output only. Identifier. The stream's name.
  • state: The state of the stream.
  • errors: Output only. Errors on the Stream.
  • labels: Labels.
  • ruleSets: Optional. Rule sets to apply to the stream.
  • createTime: Output only. The creation time of the stream.
  • updateTime: Output only. The last update time of the stream.
  • backfillAll: Automatically backfill objects included in the stream source configuration. Specific objects can be excluded.
  • displayName: Required. Display name.
  • backfillNone: Do not automatically backfill any objects.
  • satisfiesPzi: Output only. Reserved for future use.
  • satisfiesPzs: Output only. Reserved for future use.
  • sourceConfig: Required. Source connection profile configuration.
  • lastRecoveryTime: Output only. If the stream was recovered, the time of the last recovery. Note: This field is currently experimental.
  • destinationConfig: Required. Destination connection profile configuration.
  • customerManagedEncryptionKey: Immutable. A reference to a KMS encryption key. If provided, it will be used to encrypt the data. If left blank, data will be encrypted using an internal Stream-specific encryption key provisioned through KMS.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide key behavioral traits: readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false. The description adds minimal context beyond this—it clarifies the format of the 'name' parameter but doesn't disclose additional behaviors like error handling, rate limits, or authentication needs. With annotations covering safety and idempotency, the description meets a baseline but lacks rich behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a bullet point for parameter details. Every sentence earns its place: the first defines the tool's function, and the second clarifies parameter format with an example. No wasted words, and the structure is clear and efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (1 required parameter), high schema coverage (100%), presence of annotations, and an output schema (which handles return values), the description is reasonably complete. It covers the purpose and parameter format, though it lacks usage guidelines. For a simple read operation, this is sufficient but not exhaustive.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the 'name' parameter fully documented in the schema. The description adds an example of the 'name' format ('projects/{project name}/locations/{location}/streams/{stream name}') and a concrete instance, which provides useful semantic context beyond the schema's generic description. This compensates adequately given the high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get details of the stream specified by the provided resource 'name' parameter.' This is a specific verb ('Get details') + resource ('stream') combination. However, it doesn't explicitly distinguish this from sibling tools like 'get_stream_object' or 'lookup_stream_object', which might retrieve different aspects of stream-related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'list_streams' (for listing multiple streams) or 'get_stream_object' (for retrieving stream objects), nor does it specify prerequisites or contexts for usage. The agent must infer usage from the tool name and description alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

get_stream_object (Grade: A)
Read-only · Idempotent

Get details of the stream object specified by the provided resource 'name' parameter.

  • The resource 'name' parameter is in the form: 'projects/{project name}/locations/{location}/streams/{stream name}/objects/{stream object name}', for example: 'projects/my-project/locations/us-central1/streams/my-stream/objects/my-stream-object'.
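
Since an object name simply extends its parent stream's resource name, a couple of hypothetical helpers can move between the two forms (useful for following up a get_stream_object call with get_stream, or vice versa):

```python
def stream_object_name(stream, obj):
    """Append an object id to a full stream resource name."""
    return f"{stream}/objects/{obj}"

def parent_stream(object_name):
    """Recover the parent stream's resource name from an object name."""
    stream, sep, _ = object_name.partition("/objects/")
    if not sep:
        raise ValueError(f"not a stream object name: {object_name!r}")
    return stream
```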

Parameters (JSON Schema)

  • name (required): Required. The name of the stream object resource to get.

Output Schema

  • name: Output only. Identifier. The object resource's name.
  • errors: Output only. Active errors on the object.
  • createTime: Output only. The creation time of the object.
  • updateTime: Output only. The last update time of the object.
  • backfillJob: The latest backfill job that was initiated for the stream object.
  • displayName: Required. Display name.
  • sourceObject: The object identifier in the data source.
  • customizationRules: Output only. The customization rules for the object. These rules are derived from the parent Stream's `rule_sets` and represent the intended configuration for the object.
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, destructiveHint=false, idempotentHint=true, and openWorldHint=false, covering safety and idempotency. The description adds no behavioral context beyond what annotations provide (e.g., no rate limits, auth needs, or response format details), but it doesn't contradict annotations. With annotations doing heavy lifting, a baseline 3 is appropriate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose in the first sentence, followed by a bullet point with parameter format details and an example. Every sentence earns its place by adding necessary context without redundancy, making it efficient and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (1 parameter, 100% schema coverage), rich annotations covering safety and idempotency, and the presence of an output schema (which handles return values), the description is complete. It provides purpose, parameter format, and differentiation from siblings, leaving no gaps for the agent to invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the schema fully documenting the single required 'name' parameter. The description adds value by providing the exact format and an example of the 'name' parameter, which clarifies semantics beyond the schema's generic description. However, since the schema already covers the parameter well, this earns a baseline 3.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get details') and resource ('stream object specified by the provided resource name parameter'), distinguishing it from siblings like 'get_stream' (which gets stream details) and 'list_stream_objects' (which lists multiple objects). The example further clarifies the exact resource type.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implicitly indicates usage when you need details of a specific stream object (vs. listing multiple objects with 'list_stream_objects'), but it doesn't explicitly state when to use alternatives like 'lookup_stream_object' or provide exclusion criteria. The context is clear but lacks explicit sibling differentiation.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_connection_profiles (Grade: A)
Read-only · Idempotent

Lists connection profiles in a given project and location.

For example: { parent: "projects/my-project/locations/us-central1" create_time_after: 2025-10-02T10:15:33Z create_time_before: 2025-10-03T00:00:00Z display_name: bookstore page_size: 100 } will return up to 100 connection profiles in projects/my-project/locations/us-central1 that were created on or after 2025-10-02T10:15:33 UTC and before 2025-10-03T00:00:00 UTC, and have "bookstore" in their display name.
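
One documented quirk of this tool: filtering can yield a page with zero profiles that still carries a next_page_token, so only an absent token ends pagination. A hedged sketch of an exhaustive listing loop, with `call_tool(tool_name, arguments)` again standing in for your MCP client:

```python
def list_all_connection_profiles(call_tool, parent, **filters):
    """Exhaust pagination for list_connection_profiles.

    A page may contain zero profiles yet still carry a nextPageToken,
    so the loop terminates only when the token is absent.
    """
    profiles = []
    token = None
    while True:
        args = {"parent": parent, **filters}
        if token:
            args["pageToken"] = token
        page = call_tool("list_connection_profiles", args)
        profiles.extend(page.get("connectionProfiles", []))
        token = page.get("nextPageToken")
        if not token:
            return profiles
```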

Parameters (JSON Schema)

  • parent (required): Required. The parent that owns the collection of connection profiles. Must be in the format `projects/*/locations/*`. For example: 'projects/my-project/locations/us-central1'.

  • pageSize (optional): Optional. Use to limit the number of connection profiles returned. Valid values are between 1 and 1000 inclusive. Defaults to 1000 if not provided or outside of the valid range. Note that due to filtering, it is possible to get no results (but a next_page_token), so you should keep calling this method until you get a response with an empty next_page_token.

  • pageToken (optional): Optional. A page token, received from a previous `list_connection_profiles` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `list_connection_profiles` must match the call that provided the page token.

  • displayName (optional): Optional. Use to get connection profiles whose display name contains the provided name.

  • createTimeAfter (optional): Optional. Use to get connection profiles that were created on or after the provided date/time, formatted as RFC-3339. Common examples: 2023-09-24T15:30:00Z or 2023-09-24T15:30:00.000+09:00.

  • createTimeBefore (optional): Optional. Use to get connection profiles that were created before the provided date/time, formatted as RFC-3339. Common examples: 2023-09-24T15:30:00Z or 2023-09-24T15:30:00.000+09:00.

Output Schema

  • unreachable: Locations that could not be reached.
  • nextPageToken: A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
  • connectionProfiles: List of connection profiles.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations by explaining pagination behavior ('will return up to 100 connection profiles'), filtering logic ('that were created on or after... and before...'), and the relationship between parameters and results, which helps the agent understand how the tool behaves operationally.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: a clear purpose statement followed by a detailed, well-formatted example that demonstrates usage without redundancy. Every sentence serves a purpose, and the example is directly instructional without extraneous information.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 parameters, 1 required), rich annotations, and the presence of an output schema, the description is complete enough. It explains the core functionality and provides a practical example, while structured fields handle parameter details, safety, and return values. No critical gaps remain for a listing tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter well-documented in the schema. The description adds minimal semantic value beyond the schema by illustrating how parameters interact in a concrete example (e.g., combining parent, display_name, and date filters), but doesn't provide new parameter-specific insights. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Lists') and resource ('connection profiles in a given project and location'), making the purpose immediately understandable. However, it doesn't explicitly differentiate this tool from sibling list tools like 'list_static_ips' or 'list_streams', which would require mentioning what makes connection profiles distinct from those other resources.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it includes a helpful example, it doesn't mention sibling tools like 'list_static_ips' or 'list_streams' to help the agent choose between different listing operations, nor does it specify prerequisites or constraints beyond what's in the parameters.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_static_ips (Grade: A)
Read-only · Idempotent

Lists static IP addresses of the provided resource name that need to be allowlisted by the customer when using the static-IP connectivity method. Returns up to 100 IP addresses.

  • The resource 'name' parameter is in the form 'projects/{project name}/locations/{location}', for example: 'projects/my-project/locations/us-central1'.

Parameters (JSON Schema)

  • name (required): Required. The resource name for the location for which static IPs should be returned. Must be in the format `projects/*/locations/*`. For example: 'projects/my-project/locations/us-central1'.

Output Schema

  • staticIps: List of static IPs by account.
  • nextPageToken: A token that can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, non-destructive, and idempotent behavior, but the description adds valuable context beyond that: it specifies the return limit ('Returns up to 100 IP addresses') and clarifies the resource format with an example. This enhances understanding of the tool's operational constraints without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in three sentences: purpose, behavioral constraint, and parameter clarification. Each sentence adds value without redundancy, and key information is front-loaded. The bullet point format for the example enhances readability without wasting space.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's low complexity (one parameter), rich annotations (covering safety and idempotency), and the presence of an output schema, the description is complete. It covers purpose, usage context, behavioral limits, and parameter formatting, providing all necessary context for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents the 'name' parameter. The description repeats the format example from the schema but doesn't add new semantic meaning beyond what's already in the structured data. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Lists static IP addresses'), identifies the resource ('of the provided resource name'), and specifies the purpose ('that need to be allowlisted by the customer when using the static-IP connectivity method'). It distinguishes itself from siblings like 'list_streams' or 'list_connection_profiles' by focusing on static IPs for allowlisting.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: when customers need to allowlist IPs for static-IP connectivity. It doesn't explicitly state when not to use it or name specific alternatives among siblings, but the context is sufficient to understand its application scenario.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_stream_objectsA
Read-only · Idempotent
Inspect

Lists stream objects in a given stream.

  • Parent parameter is in the form 'projects/{project name}/locations/{location}/streams/{stream name}', for example: 'projects/my-project/locations/us-central1/streams/my-stream'.

  • Not all the details of the stream objects are returned.

  • To get the full details of a specific stream object, use the 'get_stream_object' tool.

Parameters (JSON Schema)

  • parent: Required. The parent stream that owns the collection of objects.
  • pageSize: Optional. Maximum number of objects to return. Default is 50. The maximum value is 1000; values above 1000 will be coerced to 1000.
  • pageToken: Optional. Page token received from a previous `ListStreamObjectsRequest` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `ListStreamObjectsRequest` must match the call that provided the page token.
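The pagination contract above (pass `nextPageToken` back as `pageToken`, keep every other parameter identical) can be sketched as a drain loop. `call_tool` is a stand-in for however your MCP client invokes `list_stream_objects`; the parameter and field names follow the documented schema, but the call mechanism itself is an assumption.

```python
from typing import Callable, Iterator


def iter_stream_objects(call_tool: Callable[[dict], dict], parent: str,
                        page_size: int = 50) -> Iterator[dict]:
    """Drain all pages from a hypothetical list_stream_objects invocation.

    `call_tool` takes the request dict and returns the tool's JSON response.
    """
    request = {"parent": parent, "pageSize": page_size}
    while True:
        response = call_tool(request)
        yield from response.get("streamObjects", [])
        token = response.get("nextPageToken")
        if not token:  # an omitted or empty token means no more pages
            break
        # All other parameters must match the call that produced the token.
        request["pageToken"] = token
```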

Output Schema (JSON Schema)

  • nextPageToken: A token, which can be sent as `page_token` to retrieve the next page.
  • streamObjects: List of stream objects.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already cover read-only, non-destructive, and idempotent behavior, but the description adds valuable context: it notes that 'Not all the details of the stream objects are returned' and explains pagination behavior via the 'pageToken' parameter. This enhances understanding beyond the annotations, though it doesn't mention rate limits or auth needs.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is front-loaded with the core purpose, followed by bullet points that efficiently cover key details like parameter format, limitations, and alternatives. Every sentence adds value without redundancy, making it highly concise and well-structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a list operation with pagination), rich annotations (covering safety and behavior), 100% schema coverage, and the presence of an output schema, the description is complete. It addresses purpose, usage guidelines, limitations, and parameter examples, leaving no significant gaps for the agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all parameters. The description adds minimal param semantics: it provides an example format for the 'parent' parameter but doesn't explain 'pageSize' or 'pageToken' beyond what the schema already states. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Lists stream objects') and resource ('in a given stream'), distinguishing it from siblings like 'list_streams' (which lists streams themselves) and 'get_stream_object' (which retrieves full details of a single object). The purpose is unambiguous and well-defined.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when to use this tool vs. alternatives: it states 'To get the full details of a specific stream object, use the 'get_stream_object' tool.' This gives clear guidance on tool selection, and the context of listing objects within a specific stream is well-established.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

list_streamsA
Read-only · Idempotent
Inspect

Lists streams in a given project and location.

For example: { parent: "projects/my-project/locations/us-central1", create_time_after: 2025-10-02T10:15:33Z, create_time_before: 2025-10-03T00:00:00Z, display_name: bookstore, page_size: 100, running: true } will return up to 100 running streams in projects/my-project/locations/us-central1 that were created on or after 2025-10-02T10:15:33 UTC and before 2025-10-03T00:00:00 UTC, and have "bookstore" in their display name.
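Assuming the camelCase field names from the JSON schema, that example request could be written as the payload below. The dict itself is a sketch of the tool's input, not a verified wire format.

```python
# Hypothetical list_streams request mirroring the example in the description.
request = {
    "parent": "projects/my-project/locations/us-central1",
    "createTimeAfter": "2025-10-02T10:15:33Z",   # RFC 3339 timestamps
    "createTimeBefore": "2025-10-03T00:00:00Z",
    "displayName": "bookstore",                  # substring match on display name
    "pageSize": 100,                             # valid range 1..1000, defaults to 1000
    "running": True,                             # only streams in state RUNNING
}
```

Note the pageSize caveat below: because filtering is applied server-side, a page may come back empty yet still carry a nextPageToken, so keep paging until the token is absent.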

Parameters (JSON Schema)

  • failed: Optional. Use to get streams that are currently in state FAILED or FAILED_PERMANENTLY.
  • parent: Required. The parent that owns the collection of streams. Must be in the format `projects/*/locations/*`. For example: 'projects/my-project/locations/us-central1'
  • running: Optional. Use to get streams that are currently in state RUNNING.
  • pageSize: Optional. Use to limit the number of streams returned. Valid values are between 1 and 1000 inclusive. Defaults to 1000 if not provided or outside of the valid range. Note that due to filtering, it is possible to get no results (but a next_page_token), so you should keep calling this method until you get a response with an empty next_page_token.
  • pageToken: Optional. A page token, received from a previous `list_streams` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `list_streams` must match the call that provided the page token.
  • displayName: Optional. Use to get streams whose display name contains the provided name.
  • createTimeAfter: Optional. Use to get streams that were created on or after the provided date/time, formatted as RFC-3339. Common examples: 2023-09-24T15:30:00Z or 2023-09-24T15:30:00.000+09:00.
  • createTimeBefore: Optional. Use to get streams that were created before the provided date/time, formatted as RFC-3339. Common examples: 2023-09-24T15:30:00Z or 2023-09-24T15:30:00.000+09:00.

Output Schema (JSON Schema)

  • streams: List of streams.
  • unreachable: Locations that could not be reached.
  • nextPageToken: A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, covering safety and idempotency. The description adds valuable behavioral context beyond annotations: it explains pagination behavior ('return up to 100', implying limits), filtering logic (AND-based on multiple parameters), and the effect of 'pageSize' (with the example showing it caps results). This enriches the agent's understanding of how the tool behaves in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured: a concise purpose statement followed by a detailed example that demonstrates usage and clarifies behavior. Every sentence earns its place by providing actionable information without redundancy, and the example is well-integrated to enhance understanding without verbosity.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, filtering, pagination), rich annotations (readOnly, idempotent, non-destructive), and the presence of an output schema, the description is complete. It covers key aspects like filtering logic, pagination hints, and example usage, compensating adequately where structured data might not convey practical application. No critical gaps remain for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents all 8 parameters. The description adds minimal semantic value beyond the schema: it illustrates parameter usage in the example (e.g., 'create_time_after', 'display_name') but does not explain syntax, formats, or interactions not already in the schema descriptions. This meets the baseline for high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Lists streams') with the resource ('in a given project and location'), making the purpose immediately apparent. It distinguishes this from siblings like 'get_stream' (which retrieves a single stream) and 'delete_stream' (which removes streams), establishing clear functional boundaries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage through the example (e.g., filtering by time, name, state), but does not explicitly state when to use this tool versus alternatives like 'list_stream_objects' or 'get_stream'. It provides contextual guidance on filtering but lacks explicit when/when-not directives or named alternatives for different query needs.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

lookup_stream_objectB
Read-only · Idempotent
Inspect

Lookup a stream object by its source object identifier. Parameters:

  • The 'parent' parameter is the name of the stream in the form: 'projects/{project name}/locations/{location}/streams/{stream name}', for example: 'projects/my-project/locations/us-central1/streams/my-stream'.

  • The 'source_object_identifier' parameter is the source database object identifier. Different source databases have different identifier formats. Examples:

    • For Oracle, PostgreSQL, SQL Server, and Spanner databases, the identifier consists of 'schema' and 'table'.

    • For MySQL databases, the identifier consists of 'database' and 'table'.
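The identifier variants above can be illustrated with hypothetical payloads. The exact nesting of `sourceObjectIdentifier` is an assumption here (the underlying Datastream API wraps identifiers per source type), so treat these shapes as a sketch and check the tool's JSON schema before relying on them.

```python
# Hypothetical source object identifiers per database type; field names
# are illustrative and should be verified against the tool's JSON schema.
ORACLE_STYLE = {"schema": "HR", "table": "EMPLOYEES"}       # Oracle, PostgreSQL,
                                                            # SQL Server, Spanner
MYSQL_STYLE = {"database": "bookstore", "table": "orders"}  # MySQL

# A sketch of a full lookup_stream_object request.
request = {
    "parent": "projects/my-project/locations/us-central1/streams/my-stream",
    "sourceObjectIdentifier": MYSQL_STYLE,
}
```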

Parameters (JSON Schema)

  • parent: Required. The parent stream that owns the collection of objects.
  • sourceObjectIdentifier: Required. The source object identifier which maps to the stream object.

Output Schema (JSON Schema)

  • name: Output only. Identifier. The object resource's name.
  • errors: Output only. Active errors on the object.
  • createTime: Output only. The creation time of the object.
  • updateTime: Output only. The last update time of the object.
  • backfillJob: The latest backfill job that was initiated for the stream object.
  • displayName: Required. Display name.
  • sourceObject: The object identifier in the data source.
  • customizationRules: Output only. The customization rules for the object. These rules are derived from the parent Stream's `rule_sets` and represent the intended configuration for the object.
Behavior3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, idempotentHint=true, and destructiveHint=false, so the agent knows this is a safe, repeatable read operation. The description adds no behavioral context beyond what annotations provide, such as rate limits, authentication needs, or what 'lookup' entails operationally. However, it doesn't contradict annotations, so it meets the lower bar with annotations present.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with a clear purpose statement followed by a bulleted parameter explanation. It is appropriately sized, and although the parameter details are somewhat verbose, every sentence adds value, making it efficient for an agent to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (multiple database identifier types) and rich annotations/output schema, the description is reasonably complete. It explains parameter nuances that the schema alone might not convey fully. However, it lacks usage guidelines and behavioral context beyond annotations, leaving some gaps in overall completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema fully documents both parameters. The description adds valuable semantic context: it explains the 'parent' parameter format with an example and clarifies that 'source_object_identifier' varies by database type with specific examples (e.g., schema/table for Oracle). This goes beyond the schema's generic descriptions, enhancing understanding.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'lookup' and the resource 'stream object by its source object identifier', making the purpose specific and understandable. However, it doesn't explicitly distinguish this tool from sibling tools like 'get_stream_object' or 'list_stream_objects', which likely have related but different functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like 'get_stream_object' or 'list_stream_objects', nor does it specify prerequisites, exclusions, or typical use cases. The agent must infer usage from the tool name and parameters alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

run_streamA
Inspect

Starts an already created stream, specified by the provided resource 'name' parameter.

Parameters

  • 'name': The resource name of the stream to start.

    • 'name' should be in the format of: 'projects/{project name}/locations/{location}/streams/{stream name}', for example: 'projects/my-project/locations/us-central1/streams/my-stream'.

  • 'force': Whether to run the stream without running prior configuration verification. The default is 'false'.

Returns

  • This tool returns a long-running operation. Use the 'get_operation' tool with the returned operation name to poll its status until it completes. Operation may take several minutes; do not check more often than every ten seconds.
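The polling advice above translates into a simple wait loop. `get_operation` below is a stand-in for the server's 'get_operation' tool (the call mechanism is an assumption); the ten-second interval and the timeout are the only policy choices, and the interval follows the description's "no more often than every ten seconds" guidance.

```python
import time
from typing import Callable


def wait_for_operation(get_operation: Callable[[str], dict], name: str,
                       poll_seconds: float = 10.0,
                       timeout: float = 600.0) -> dict:
    """Poll a long-running operation until it completes.

    `get_operation` takes the operation name and returns the operation dict
    with `done`, and on completion either `error` or `response`.
    """
    deadline = time.monotonic() + timeout
    while True:
        op = get_operation(name)
        if op.get("done"):
            if "error" in op:
                raise RuntimeError(f"operation failed: {op['error']}")
            return op.get("response", {})
        if time.monotonic() >= deadline:
            raise TimeoutError(f"operation {name} did not finish in {timeout}s")
        time.sleep(poll_seconds)  # honor the 10-second minimum poll interval
```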

Parameters (JSON Schema)

  • name: Required. Name of the stream resource to start, in the format: projects/{project_id}/locations/{location}/streams/{stream_name}
  • force: Optional. Update the stream without validating it.
  • cdcStrategy: Optional. The CDC strategy of the stream. If not set, the system's default value will be used.

Output Schema (JSON Schema)

  • done: If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
  • name: The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
  • error: The error result of the operation in case of failure or cancellation.
  • metadata: Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
  • response: The normal, successful response of the operation. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
Behavior4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations indicate this is a non-readOnly, non-destructive operation, and the description adds valuable behavioral context beyond that: it discloses that the tool returns a long-running operation requiring polling with 'get_operation', specifies operation duration ('may take several minutes'), and advises on polling frequency ('do not check more often than every ten seconds'). This enriches the agent's understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by organized sections for parameters and returns. Each sentence earns its place by providing essential information without redundancy, such as the parameter details and polling instructions, making it efficient and easy to parse.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (starting a stream with long-running operations), the description is complete enough: it explains the purpose, parameters, return behavior, and integration with 'get_operation' for polling. With annotations covering safety aspects and an output schema likely detailing the operation response, no critical gaps remain for agent usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds minimal extra semantics: it reiterates the 'name' parameter format with an example and explains the 'force' parameter's effect ('run without prior configuration verification'), but does not provide significant additional meaning beyond the schema's detailed descriptions, warranting a baseline score.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Starts an already created stream') and resource ('specified by the provided resource 'name' parameter'), distinguishing it from siblings like 'delete_stream' (destructive) or 'get_stream' (read-only). It precisely defines what the tool does without being vague or tautological.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool (to start an already created stream) and implicitly suggests an alternative ('get_operation' for polling the result). However, it does not explicitly state when NOT to use it (e.g., vs. creating a new stream, which isn't a sibling) or compare it to all relevant siblings like 'get_stream' for checking status, leaving some room for improvement.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
