slos_update
Update service level objectives (SLOs) in Datadog to maintain performance monitoring standards and ensure service reliability.
Instructions
Update SLO
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
With no annotations provided, the description carries the full burden of behavioral disclosure but fails completely. 'Update SLO' implies a mutation operation but doesn't specify whether this requires special permissions, what happens when an SLO is updated (e.g., historical data implications), whether the operation is idempotent, or what the response contains. For a mutation tool with zero annotation coverage, this represents a critical gap in behavioral transparency.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
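The structured channel for such disclosure is MCP tool annotations. A hypothetical sketch of the annotations this tool could ship (the field names follow the MCP specification's ToolAnnotations; the values shown are assumptions about typical SLO-update behavior, not the server's actual metadata):

```json
{
  "name": "slos_update",
  "annotations": {
    "title": "Update SLO",
    "readOnlyHint": false,
    "destructiveHint": true,
    "idempotentHint": true,
    "openWorldHint": true
  }
}
```

With hints like these in place, the prose description is free to focus on consequences (e.g., what happens to historical SLO status data) rather than restating mutation semantics.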
Is the description appropriately sized, front-loaded, and free of redundancy?
While technically concise with only two words, this represents under-specification rather than effective conciseness. The description fails to provide necessary information about the tool's purpose and usage. Every word should earn its place, but here the minimal content fails to serve the required informational purpose.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given this is a mutation tool (updating SLOs) with no annotations, no output schema, and a completely inadequate description, the contextual completeness is severely lacking. The description doesn't explain what SLOs are, what can be updated, what the expected outcome is, or how this differs from similar tools. For a tool that presumably modifies important service level objectives, this level of documentation is dangerously incomplete.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The input schema has 0 parameters with 100% description coverage, so the schema fully documents the (empty) parameter surface and leaves no gaps for the description to compensate for. While the description adds no parameter-specific information beyond the schema, the baseline for a zero-parameter tool with full schema coverage is appropriately set at 4.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Update SLO' is a tautology that restates the tool name 'slos_update' without adding any meaningful clarification. It specifies the verb ('Update') and resource ('SLO'), but provides no details about what aspects of an SLO can be updated, how it differs from sibling tools like 'update_slo' (which appears to be a different tool), or what an SLO represents in this context.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
The description provides absolutely no guidance about when to use this tool versus alternatives. There are multiple sibling tools related to SLOs (create_slos, slos_get, slos_list, slos_delete, update_slo, etc.), but the description offers no differentiation. It doesn't mention prerequisites, required permissions, or appropriate contexts for invocation.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
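To illustrate what the evaluations above are asking for, here is a hypothetical rewrite of the tool's description alongside a sketch of an input schema. The parameter names (`slo_id`, `name`, `thresholds`) are assumptions modeled on Datadog's public SLO API, not this server's actual schema, and the sibling tool names are those mentioned above:

```json
{
  "name": "slos_update",
  "description": "Update an existing Datadog SLO by ID. Modifies the SLO definition (name, thresholds, query) without resetting historical status data. Requires an API key with SLO write permission. Use slos_get to inspect an SLO before updating; use create_slos to create new SLOs.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "slo_id": { "type": "string", "description": "ID of the SLO to update" },
      "name": { "type": "string", "description": "New display name for the SLO" },
      "thresholds": { "type": "array", "description": "Target and warning thresholds per timeframe" }
    },
    "required": ["slo_id"]
  }
}
```

A description of this shape addresses purpose, differentiation from sibling tools, auth requirements, and side effects in a few sentences, while the schema carries the parameter-level detail.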
We provide all the information about MCP servers via our MCP API.
curl -X GET 'https://glama.ai/api/mcp/v1/servers/ClaudioLazaro/mcp-datadog-server'
If you have feedback or need assistance with the MCP directory API, please join our Discord server.