query_resources
Retrieve timeseries data points from Datadog for monitoring and analysis purposes using natural language queries.
Instructions
Query timeseries points.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | — | — | — |
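On the wire, "No arguments" corresponds to an empty object schema in the tool's `tools/list` entry. A minimal sketch as a TypeScript literal; the name and description are copied from this listing, while the exact schema this server emits is an assumption:

```typescript
// Sketch of how a zero-argument tool appears in an MCP tools/list response.
// The empty inputSchema is what "No arguments" implies; the server's actual
// wire output is assumed, not verified.
const tool = {
  name: "query_resources",
  description:
    "Retrieve timeseries data points from Datadog for monitoring and " +
    "analysis purposes using natural language queries.",
  inputSchema: { type: "object", properties: {} }, // no arguments accepted
};
```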
Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?
No annotations are provided, so the description carries the full burden of disclosure. It states only the action ('query') without revealing behavioral traits: whether the tool is read-only, requires authentication, is rate-limited, paginates results, or what format the output takes. For a tool with zero annotation coverage, this is inadequate.
Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
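For contrast, here is a minimal sketch of what disclosing those traits could look like, using the MCP spec's standard tool annotations (`readOnlyHint`, `destructiveHint`, `idempotentHint`, `openWorldHint`). It assumes a recent `@modelcontextprotocol/sdk` with `registerTool` support; the rewritten description and the `DD_API_KEY`/`DD_APP_KEY` requirement are illustrative assumptions about this server, not its confirmed behavior.

```typescript
// Sketch: disclosing behavioral traits via MCP tool annotations.
// Assumes a recent @modelcontextprotocol/sdk; auth details are guesses.
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";

const server = new McpServer({ name: "mcp-datadog-server", version: "1.0.0" });

server.registerTool(
  "query_resources",
  {
    description:
      "Read-only: retrieve Datadog timeseries points via a natural language " +
      "query. Requires Datadog API credentials (e.g. DD_API_KEY/DD_APP_KEY) " +
      "and is subject to Datadog's API rate limits.",
    annotations: {
      readOnlyHint: true,   // never mutates Datadog state
      destructiveHint: false,
      idempotentHint: true, // same inputs, same effect on retries
      openWorldHint: true,  // calls out to an external service
    },
  },
  async () => ({
    content: [{ type: "text", text: "…timeseries points…" }],
  })
);
```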
Is the description appropriately sized, front-loaded, and free of redundancy?
The description is a single, efficient sentence with no wasted words. It's front-loaded and appropriately sized for the minimal information it conveys, earning full marks for conciseness.
Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.
Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?
Given the tool's complexity (querying timeseries data), its missing annotations and output schema, and the many sibling query tools on this server, the description is severely incomplete. It fails to explain what 'timeseries points' are, how results are returned, or any behavioral context, leaving an agent poorly equipped to use the tool effectively on a first attempt.
Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.
Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?
The tool has zero parameters and 100% schema description coverage, so the schema fully documents the (empty) input surface. There are no parameter semantics for the description to add, and it does not contradict the schema; a baseline score of 4 is appropriate because nothing here misleads about parameters.
Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
Does the description clearly state what the tool does and how it differs from similar tools?
The description 'Query timeseries points' states a verb ('query') and resource ('timeseries points'), but it's vague about what specific timeseries points are being queried and lacks differentiation from sibling tools like 'query_scalars', 'query_timeseries', 'metrics_query_timeseries', or 'search_resources'. It doesn't specify scope or constraints, making it minimally informative.
Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.
Does the description explain when to use this tool, when not to, or what alternatives exist?
No guidance is provided on when to use this tool versus alternatives. With many sibling tools for querying data (e.g., 'query_scalars', 'metrics_query_timeseries', 'search_resources'), the description offers no context, prerequisites, or distinctions, leaving the agent without usage direction.
Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
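To make the gap concrete, a description with explicit routing guidance might read like the sketch below. The sibling tool names are the ones cited above; the routing rules themselves are hypothetical, since this server does not document them:

```typescript
// Hypothetical rewrite of the description with "use X instead of Y when Z"
// routing guidance. The rules are illustrative, not this server's documented
// behavior.
const description =
  "Query Datadog timeseries data points (numeric values over time) from a " +
  "natural language prompt. Read-only. Use this tool for exploratory, " +
  "plain-English questions about metrics. Prefer metrics_query_timeseries " +
  "when you already have an exact Datadog query string, query_scalars when " +
  "you need a single aggregate value, and search_resources to discover " +
  "which resources exist before querying them.";
```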
We provide all the information about MCP servers via our MCP API. For example:

```bash
curl -X GET 'https://glama.ai/api/mcp/v1/servers/ClaudioLazaro/mcp-datadog-server'
```
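The same endpoint can be called from code. A minimal sketch for Node 18+ in an ES module; the response shape is not documented in this section, so it is printed verbatim:

```typescript
// Fetch this server's directory entry from the Glama MCP API.
// Requires Node 18+ (global fetch) and top-level await (ES module).
const url =
  "https://glama.ai/api/mcp/v1/servers/ClaudioLazaro/mcp-datadog-server";

const res = await fetch(url);
if (!res.ok) throw new Error(`Glama API request failed: HTTP ${res.status}`);
console.log(JSON.stringify(await res.json(), null, 2));
```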
If you have feedback or need assistance with the MCP directory API, please join our Discord server.