
get_cluster_log

Retrieve scaling and activity log events for an Ocean cluster on AWS or Azure, filtered by date range and severity, with a configurable entry limit, to troubleshoot cluster operations.

Instructions

Get scaling and activity log events for an Ocean cluster (AWS or Azure).

Args:
    cluster_id: The Ocean cluster ID (e.g. o-abc12345)
    from_date: Start date in YYYY-MM-DD format (e.g. 2026-03-19)
    to_date: End date in YYYY-MM-DD format (e.g. 2026-03-20)
    severity: Filter by severity: ALL, INFO, WARN, ERROR (default: ALL)
    limit: Max number of log entries (default: 500)
    account_id: Optional account ID to query. Defaults to SPOTINST_ACCOUNT_ID env var.
    cloud: Cloud provider: aws or azure (default: aws)
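
For illustration, a well-formed argument payload assembled from these formats might look like the following sketch (Python; the field values are examples, and the actual invocation mechanism depends on the MCP client):

arguments = {
    "cluster_id": "o-abc12345",   # Ocean cluster ID
    "from_date": "2026-03-19",    # start of window, YYYY-MM-DD
    "to_date": "2026-03-20",      # end of window, YYYY-MM-DD
    "severity": "ERROR",          # one of ALL, INFO, WARN, ERROR
    "limit": 100,                 # cap on returned entries (default 500)
    "cloud": "aws",               # "aws" or "azure"
    # account_id omitted: the tool falls back to SPOTINST_ACCOUNT_ID
}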

Input Schema

Name        Required  Description  Default
cluster_id  Yes       -            -
from_date   Yes       -            -
to_date     Yes       -            -
severity    No        -            ALL
limit       No        -            -
account_id  No        -            -
cloud       No        -            aws

Output Schema

Name    Required  Description  Default
result  Yes       -            -
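
The schema declares only a required result field and leaves its internal shape undocumented. Purely as an assumption, modeled on typical log endpoints rather than on this tool's actual output, an entry might look like:

# Assumed shape only; the output schema above does not define the
# structure of "result", so verify against a real response.
result = [
    {
        "createdAt": "2026-03-19T14:02:11Z",
        "severity": "WARN",
        "message": "Could not scale up: maximum capacity reached",
    },
]
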
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description bears full responsibility for disclosing behavior. It does not mention error handling, rate limits, data ordering, or any side effects. The tool returns log events, but the description omits whether the output is paginated or whether the request is read-only. With no behavioral disclosure beyond the basic function, this is a notable gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
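
One concrete way to make that disclosure is through the MCP specification's tool annotations. The field names below come from the MCP spec; the values are assumptions about this tool, since its definition ships none:

annotations = {
    "readOnlyHint": True,      # assumed: fetching logs should not mutate state
    "destructiveHint": False,  # assumed: no destructive operations
    "idempotentHint": True,    # assumed: the same window yields the same events
    "openWorldHint": True,     # assumed: calls out to the external Spot/Ocean API
}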

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is concise: a single sentence defining the tool's purpose, followed by a structured parameter list in a standard docstring format. Every line is informative, with no fluff. Information density is high, and the most important detail (what the tool does) is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

The output schema exists, so return values are documented elsewhere. However, the description lacks context on result ordering, error expectations, or how to handle large result sets (e.g., pagination via the limit parameter). For a log retrieval tool with 7 parameters and no annotations, this is moderately complete but missing some operational nuance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
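
For instance, because the schema exposes a limit but no cursor, a defensive client can query in day-sized windows and treat a full page as a truncation signal. A sketch, assuming a hypothetical call_tool helper that returns a list of entries:

from datetime import date, timedelta

def fetch_logs(call_tool, cluster_id: str, start: date, end: date,
               limit: int = 500):
    """Yield log entries one day at a time to keep windows small."""
    day = start
    while day <= end:
        entries = call_tool("get_cluster_log", {
            "cluster_id": cluster_id,
            "from_date": day.isoformat(),
            "to_date": (day + timedelta(days=1)).isoformat(),
            "limit": limit,
        })
        if len(entries) >= limit:
            # A full page suggests truncation; narrow the window further
            # or filter by severity before trusting completeness.
            pass
        yield from entries
        day += timedelta(days=1)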

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0% description coverage, so the description must compensate. It adds explanations for all 7 parameters: an example cluster_id, the date format, the severity options, the limit default, the account_id fallback, and the cloud default. This enriches the schema significantly, providing meaningful context that helps an agent format parameters correctly.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
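
As an example, the documented formats and enumerations are simple to validate client-side before a call. A sketch (the build_args helper is hypothetical, with value sets taken from the docstring):

from datetime import date

VALID_SEVERITIES = {"ALL", "INFO", "WARN", "ERROR"}  # from the docstring
VALID_CLOUDS = {"aws", "azure"}

def build_args(cluster_id: str, start: date, end: date,
               severity: str = "ALL", cloud: str = "aws") -> dict:
    if severity not in VALID_SEVERITIES or cloud not in VALID_CLOUDS:
        raise ValueError("invalid severity or cloud value")
    return {
        "cluster_id": cluster_id,
        "from_date": start.isoformat(),  # date.isoformat() gives YYYY-MM-DD
        "to_date": end.isoformat(),
        "severity": severity,
        "cloud": cloud,
    }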

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool retrieves 'scaling and activity log events for an Ocean cluster (AWS or Azure)'. The verb 'Get' and specific resource 'log events' make the purpose unambiguous. Among sibling tools (e.g., get_cluster, get_cluster_nodes), this tool is uniquely for logs, so it is well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool should be used when log events are needed, but it never states when to prefer it over alternatives such as get_cluster or get_cluster_health. There is no 'when not to use' guidance or mention of prerequisites. The parameter descriptions hint at common use cases but lack explicit context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
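
As an illustration, a few added docstring lines would close this gap. The wording below is assumed, not taken from the actual tool, and the sibling tool names are the ones cited in this review:

"""
Use this tool to investigate scaling events and errors over a time window.
Prefer get_cluster for current configuration and get_cluster_nodes for
live node state; this tool is a bounded historical query, not a stream.
"""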

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/arnstarn/mcp-server-spotinst'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.