
Datadog MCP Server

by brukhabtu

GetSpansMetric

Retrieve a specific span-based metric from your Datadog organization by specifying its metric ID. Responses are returned as JSON, with distinct payloads for success, authorization failures, missing metrics, and rate limiting.

Instructions

Get a specific span-based metric from your organization.

Path Parameters:

  • metric_id (Required): The name of the span-based metric.

Responses:

  • 200 (Success): OK

    • Content-Type: application/json

    • Response Properties:

      • data: The requested span-based metric.

    • Example:

{
  "data": "unknown_type"
}
  • 403: Not Authorized

    • Content-Type: application/json

    • Response Properties:

      • errors: A list of errors.

    • Example:

{
  "errors": [
    "Bad Request"
  ]
}
  • 404: Not Found

    • Content-Type: application/json

    • Response Properties:

      • errors: A list of errors.

    • Example:

{
  "errors": [
    "Bad Request"
  ]
}
  • 429: Too many requests

    • Content-Type: application/json

    • Response Properties:

      • errors: A list of errors.

    • Example:

{
  "errors": [
    "Bad Request"
  ]
}
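
Under the hood, this tool presumably wraps a single GET request against the Datadog API. The sketch below illustrates that call and the response codes listed above. It is a minimal sketch assuming the public v2 span-based metrics endpoint and DD-API-KEY / DD-APPLICATION-KEY header authentication; the endpoint path, environment variable names, and example metric name are assumptions, not details taken from this server's source.

import os

import requests

# Hypothetical direct call to the Datadog endpoint this MCP tool appears to wrap.
# The path (/api/v2/apm/config/metrics/{metric_id}) and the DD-API-KEY /
# DD-APPLICATION-KEY headers are assumptions based on Datadog's public v2 API.
site = os.environ.get("DD_SITE", "datadoghq.com")
metric_id = "my.span.metric"  # example name; substitute your own span-based metric

resp = requests.get(
    f"https://api.{site}/api/v2/apm/config/metrics/{metric_id}",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
    },
    timeout=10,
)

if resp.status_code == 200:
    print(resp.json()["data"])                           # the span-based metric object
elif resp.status_code in (403, 404, 429):
    print(resp.status_code, resp.json().get("errors"))   # "errors": a list of strings
else:
    resp.raise_for_status()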

Input Schema

  • metric_id (Required): The name of the span-based metric.

Output Schema

  • data (Optional)
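
When invoking the tool through an MCP client, metric_id is the only argument to supply. The following minimal sketch uses the official MCP Python SDK over stdio; the server launch command, module name, and example metric name are placeholders rather than values taken from this repository.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Placeholder launch command: replace with however this server is actually
# started in your environment (not taken from the repository).
server = StdioServerParameters(command="python", args=["-m", "datadog_mcp"])

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # metric_id is the only required argument per the input schema.
            result = await session.call_tool(
                "GetSpansMetric", {"metric_id": "my.span.metric"}
            )
            for item in result.content:  # usually a single text content item
                print(getattr(item, "text", item))

asyncio.run(main())
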
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It includes HTTP response codes (e.g., 403, 404, 429), which hint at authorization, not-found, and rate-limiting behaviors, but it does not explicitly state these traits (e.g., 'requires authentication', 'may be rate-limited'). The description adds some context but falls short of fully transparent behavioral detail.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 2/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is overly verbose: it includes extensive HTTP response details that are redundant with typical API behavior. The core purpose is stated first but is then buried under unnecessary technical specification, reducing clarity and efficiency for an AI agent.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has an output schema (implied by response examples) and 100% schema coverage, the description is moderately complete. It covers the basic operation and error cases but lacks depth in usage context, behavioral traits, and sibling differentiation, making it adequate but with clear gaps for effective tool selection.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 100%, with the parameter 'metric_id' documented as 'The name of the span-based metric.' The description repeats this in a 'Path Parameters' section but does not add meaning beyond the schema, such as examples of metric names or constraints. Baseline 3 is appropriate as the schema handles parameter documentation adequately.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description reads 'Get a specific span-based metric from your organization,' which provides a clear verb ('Get') and resource ('span-based metric'). However, it does not differentiate this tool from siblings such as 'ListSpansMetrics' or 'GetLogsMetric,' leaving ambiguity about when to use this specific retrieval tool versus list operations.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description offers no guidance on when to use this tool versus alternatives, such as 'ListSpansMetrics' for browsing metrics or other 'Get' tools for different resource types. It lacks context about prerequisites, permissions, or typical use cases, leaving the agent without usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/brukhabtu/datadog-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.