Datadog MCP Server

by brukhabtu

GetUsageObservabilityPipelines

Retrieve hourly usage data for observability pipelines using specified start and end timestamps in ISO-8601 format, UTC, to monitor and analyze pipeline activity.

Instructions

Get hourly usage for observability pipelines. Note: This endpoint has been deprecated. Hourly usage data for all products is now available in the Get hourly usage by product family API.

Query Parameters:

  • start_hr (Required): Datetime in ISO-8601 format, UTC, precise to hour: [YYYY-MM-DDThh] for usage beginning at this hour.

  • end_hr: Datetime in ISO-8601 format, UTC, precise to hour: [YYYY-MM-DDThh] for usage ending before this hour.
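The hour-precision `[YYYY-MM-DDThh]` strings these parameters expect can be built from a `datetime` as in the following minimal Python sketch (the `hour_param` helper name is illustrative, not part of the API):

```python
from datetime import datetime, timedelta, timezone

def hour_param(dt: datetime) -> str:
    """Format a datetime as the UTC, hour-precision ISO-8601 string
    expected by start_hr/end_hr, e.g. '2024-05-01T15'."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H")

# Example: the last 24 hours of usage (minutes are truncated by the format).
end = datetime(2024, 5, 2, 15, 30, tzinfo=timezone.utc)
start = end - timedelta(hours=24)
print(hour_param(start))  # 2024-05-01T15
print(hour_param(end))    # 2024-05-02T15
```

Note that `end_hr` is exclusive: usage is returned for hours strictly before it.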

Responses:

  • 200 (Success): OK

    • Content-Type: application/json;datetime-format=rfc3339

    • Response Properties:

      • data: Response containing Observability Pipelines usage.

    • Example:

{
  "data": [
    "unknown_type"
  ]
}
  • 400: Bad Request

    • Content-Type: application/json;datetime-format=rfc3339

    • Response Properties:

      • errors: A list of errors.

    • Example:

{
  "errors": [
    "Bad Request"
  ]
}
  • 403: Forbidden - User is not authorized

    • Content-Type: application/json;datetime-format=rfc3339

    • Response Properties:

      • errors: A list of errors.

    • Example:

{
  "errors": [
    "Bad Request"
  ]
}
  • 429: Too many requests

    • Content-Type: application/json;datetime-format=rfc3339

    • Response Properties:

      • errors: A list of errors.

    • Example:

{
  "errors": [
    "Bad Request"
  ]
}
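Putting the parameters together, a request URL can be assembled as sketched below. This is a hedged example: the `api.datadoghq.com` host and `/api/v1/usage/observability_pipelines` path are assumptions based on Datadog's public v1 usage API, and the `DD-API-KEY`/`DD-APPLICATION-KEY` authentication headers a real call would need are not shown.

```python
from urllib.parse import urlencode

# Assumed endpoint path; confirm against the Datadog API reference.
BASE = "https://api.datadoghq.com/api/v1/usage/observability_pipelines"

# start_hr is required; end_hr is optional and exclusive (usage ends
# before this hour). Both are UTC, hour-precision ISO-8601 strings.
params = {"start_hr": "2024-05-01T15", "end_hr": "2024-05-02T15"}
url = f"{BASE}?{urlencode(params)}"
print(url)
```

A 429 response indicates rate limiting; retrying with backoff is the usual pattern.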

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| end_hr | No | Datetime in ISO-8601 format, UTC, precise to hour: `[YYYY-MM-DDThh]` for usage ending **before** this hour. | |
| start_hr | Yes | Datetime in ISO-8601 format, UTC, precise to hour: `[YYYY-MM-DDThh]` for usage beginning at this hour. | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| data | No | Response containing Observability Pipelines usage. | |
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions the deprecation status, which is crucial context, and includes HTTP response codes (200, 400, 403, 429) with examples, adding transparency about success, errors, authorization, and rate limits. However, it lacks details on authentication requirements, pagination, or data format specifics beyond the examples, leaving some behavioral aspects unclear.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is overly verbose and poorly structured for an AI agent. It includes extensive HTTP response details (e.g., status codes, content types, examples) that are redundant with typical API conventions and could be inferred from annotations or output schemas. The front-loaded deprecation note is useful, but the subsequent sections are bloated with information that doesn't efficiently aid tool selection or invocation, reducing conciseness.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a deprecated read operation with two parameters), the description is mostly complete. It covers the deprecation context, parameter details (though redundant with the schema), and response behaviors. Since an output schema exists (implied by 'Has output schema: true'), the description doesn't need to explain return values in depth. The excessive detail in the response section slightly detracts from focus, but overall the description provides sufficient context for agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, fully documenting the two parameters (start_hr and end_hr) with format details. The description repeats this parameter information verbatim in the 'Query Parameters' section, adding no new semantic value beyond what the schema provides. According to the rules, when schema coverage is high (>80%), the baseline score is 3, as the description does not compensate with additional insights.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get hourly usage for observability pipelines.' It specifies the verb ('Get') and resource ('hourly usage for observability pipelines'), making it easy to understand what the tool does. However, it does not explicitly distinguish this tool from sibling tools like 'GetHourlyUsage' or other usage-related tools, which slightly reduces clarity.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance by noting that the endpoint is deprecated and directing users to an alternative API: 'Hourly usage data for all products is now available in the [Get hourly usage by product family API].' This clearly indicates when not to use this tool and offers a specific alternative, which is optimal for agent decision-making.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
