
Datadog MCP Server

by brukhabtu

ListSpansMetrics

Retrieve the configured span-based metrics, with their definitions, for enhanced observability on Datadog's platform using the MCP server.

Instructions

Get the list of configured span-based metrics with their definitions.

Responses:

  • 200 (Success): OK

    • Content-Type: application/json

    • Response Properties:

      • data: A list of span-based metric objects.

    • Example:

{
  "data": [
    "unknown_type"
  ]
}
  • 403: Not Authorized

    • Content-Type: application/json

    • Response Properties:

      • errors: A list of errors.

    • Example:

{
  "errors": [
    "Bad Request"
  ]
}
  • 429: Too many requests

    • Content-Type: application/json

    • Response Properties:

      • errors: A list of errors.

    • Example:

{
  "errors": [
    "Bad Request"
  ]
}
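
For orientation, here is a minimal sketch of invoking this tool from the official MCP Python SDK client. The datadog-mcp launch command is a placeholder, not the server's documented entry point; substitute however you run this server locally.

import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

# Hypothetical launch command for the server; adjust to your install.
server = StdioServerParameters(command="datadog-mcp")

async def main() -> None:
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # ListSpansMetrics takes no arguments (see Input Schema below).
            result = await session.call_tool("ListSpansMetrics", arguments={})
            for item in result.content:
                print(item)

asyncio.run(main())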

Input Schema

No arguments

Output Schema

Name    Required    Description                              Default
data    No          A list of span-based metric objects.     (none)
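
The 403 and 429 responses documented above imply key-based authentication and rate limiting, even though the description never states either. Below is a hedged sketch of the upstream request this tool presumably wraps, assuming Datadog's public v2 Spans Metrics route (GET /api/v2/apm/config/metrics) and the standard key headers, with simple backoff on 429.

import os
import time

import requests

# Assumed upstream route; how the MCP server proxies it is not documented here.
URL = "https://api.datadoghq.com/api/v2/apm/config/metrics"
HEADERS = {
    "DD-API-KEY": os.environ["DD_API_KEY"],
    "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
}

for attempt in range(5):
    resp = requests.get(URL, headers=HEADERS, timeout=10)
    if resp.status_code == 429:
        # Back off on rate limiting, honoring Retry-After when present.
        time.sleep(float(resp.headers.get("Retry-After", 2 ** attempt)))
        continue
    resp.raise_for_status()  # surfaces 403 Not Authorized and other errors
    print(resp.json()["data"])  # list of span-based metric objects
    break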
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It mentions HTTP response codes (200, 403, 429) and examples, which adds some context about success and error conditions, but it fails to describe critical behavioral traits such as authentication requirements, rate limits, pagination, or whether the operation is read-only or has side effects. For a tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with a clear purpose statement followed by detailed HTTP response documentation. However, it includes extensive examples and formatting that may be excessive for a tool description, potentially diluting the core information. While not overly verbose, it could be more streamlined by focusing on essential guidance rather than API-like documentation.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has 0 parameters, 100% schema coverage, and a defined output schema, the description is moderately complete. It covers the purpose and response formats but lacks behavioral details such as authentication requirements and usage context. With annotations absent, it should provide more operational guidance to fully support an agent.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% description coverage, so no parameter information is needed in the description. The description appropriately focuses on other aspects without repeating schema details, earning a high baseline score. It adds value by not cluttering with redundant parameter explanations.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get the list of configured span-based metrics with their definitions.' This specifies the verb ('Get'), resource ('span-based metrics'), and scope ('configured' with 'definitions'), making it easy to understand what the tool does. However, it doesn't explicitly differentiate from sibling tools like 'GetSpansMetric' or 'ListSpansGet', which might have overlapping or related functionality.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It lacks any mention of prerequisites, context, or comparisons to sibling tools such as 'GetSpansMetric' or 'ListSpansGet', leaving the agent without clear usage instructions. This omission reduces its effectiveness in tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
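
The recurring complaints in this review (no behavioral annotations, no auth or rate-limit disclosure, no guidance relative to sibling tools such as GetSpansMetric) are all addressable server-side. A sketch of what that could look like, assuming an MCP Python SDK release whose FastMCP tool decorator accepts ToolAnnotations; the docstring wording is illustrative, not the author's.

from mcp.server.fastmcp import FastMCP
from mcp.types import ToolAnnotations

mcp = FastMCP("datadog-mcp")

# Illustrative only: the hints and docstring below are suggestions,
# not the shipped server's definitions.
@mcp.tool(
    annotations=ToolAnnotations(
        readOnlyHint=True,    # listing metrics mutates nothing
        idempotentHint=True,
        openWorldHint=True,   # calls the external Datadog API
    )
)
def ListSpansMetrics() -> dict:
    """Get the list of configured span-based metrics with their definitions.

    Read-only. Requires a Datadog API key and application key; returns 403
    when not authorized and 429 when rate limited. Prefer GetSpansMetric
    when you need a single metric's definition rather than the full list.
    """
    ...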

