Glama
ClaudioLazaro

MCP Datadog Server

search_security_monitoring_signals

Search and retrieve security monitoring signals from Datadog based on specific queries to identify and investigate potential security threats.

Instructions

Returns security signals that match a search query. Both this endpoint and the GET endpoint can be used interchangeably for listing security signals.
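The description leaves the shape of a "search query" implicit. A minimal sketch of the request body such a search tool would likely send to Datadog's v2 signals search endpoint (POST /api/v2/security_monitoring/signals/search) is shown below; the field names follow Datadog's documented v2 search body, but the exact endpoint and fields are assumptions to verify against the Datadog API reference.

```python
import json

# Hedged sketch: the body assumed to be accepted by Datadog's
# POST /api/v2/security_monitoring/signals/search endpoint.
def build_search_body(query, time_from, time_to, limit=25):
    """Build a JSON body for a security-signal search."""
    return {
        "filter": {
            "query": query,        # Datadog search syntax, e.g. "status:high"
            "from": time_from,     # ISO-8601 start of the search window
            "to": time_to,         # ISO-8601 end of the search window
        },
        "page": {"limit": limit},  # page size; a cursor is returned for paging
        "sort": "timestamp",       # ascending by signal timestamp
    }

body = build_search_body(
    "security:attack status:high",
    "2024-01-01T00:00:00Z",
    "2024-01-02T00:00:00Z",
)
print(json.dumps(body, indent=2))
```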

Input Schema


No arguments

Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. It states the tool returns matching security signals but does not describe any behavioral traits such as pagination, rate limits, authentication requirements, or what constitutes a 'search query' (e.g., syntax, filters). For a search tool with zero annotation coverage, this leaves significant gaps in understanding how the tool behaves.
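The pagination gap flagged above matters in practice: Datadog's v2 list APIs typically return a `meta.page.after` cursor, and an agent that ignores it sees only the first page. A sketch of the drain loop, with the HTTP call stubbed out (the cursor field name is an assumption based on Datadog's v2 conventions):

```python
# Sketch of cursor-based paging, assuming the endpoint returns a
# `meta.page.after` cursor as Datadog's v2 list APIs typically do.
def fetch_all_signals(fetch_page):
    """Drain every page; `fetch_page(cursor)` stands in for the HTTP call."""
    signals, cursor = [], None
    while True:
        resp = fetch_page(cursor)
        signals.extend(resp.get("data", []))
        cursor = resp.get("meta", {}).get("page", {}).get("after")
        if not cursor:
            break
    return signals

# Stubbed two-page response for illustration.
pages = {
    None: {"data": [{"id": "sig-1"}], "meta": {"page": {"after": "c1"}}},
    "c1": {"data": [{"id": "sig-2"}], "meta": {}},
}
result = fetch_all_signals(lambda cursor: pages[cursor])
print([s["id"] for s in result])  # → ['sig-1', 'sig-2']
```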

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences long and front-loaded with the core purpose. The first sentence directly states what the tool does, and the second adds relevant context about interchangeability. There is no wasted verbiage, but the second sentence could be more precise.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (a search function with no parameters), the description is minimally adequate. It explains the purpose and hints at an alternative, but lacks details on behavioral aspects (e.g., output format, error handling) since no annotations or output schema are provided. For a search tool, more context on what 'security signals' entail or how results are structured would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters (properties: {}, type: object), and schema description coverage is 100%. With no parameters, the description does not need to add parameter semantics. The baseline for 0 parameters is 4, as the schema fully documents the absence of parameters, and the description appropriately does not introduce unnecessary details.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Returns security signals that match a search query.' This specifies the verb ('returns'), resource ('security signals'), and action ('search query'). However, it does not distinguish this tool from potential siblings like 'get_security_monitoring_signals' or other search tools in the list, which would require explicit differentiation for a score of 5.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides some usage context by mentioning that 'Both this endpoint and the GET endpoint can be used interchangeably for listing security signals.' This implies an alternative (a GET endpoint) but does not explicitly state when to choose this tool over the GET endpoint or other search tools. No exclusions or specific scenarios are provided, making the guidance incomplete.
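The GET/POST choice the description leaves implicit can be made concrete. A hedged sketch of the two interchangeable variants: the GET form encodes the filter in query parameters (suited to short filters), while the POST `/search` form carries a JSON body (suited to long or complex filters). The base URL and `filter[query]`-style parameter names follow Datadog's v2 API conventions but are assumptions to check against the docs.

```python
from urllib.parse import urlencode

# Assumed base path for Datadog's v2 security signals endpoints.
BASE = "https://api.datadoghq.com/api/v2/security_monitoring/signals"

def get_url(query, limit=25):
    """GET variant: filter encoded as query parameters; fine for short filters."""
    params = {"filter[query]": query, "page[limit]": limit}
    return f"{BASE}?{urlencode(params)}"

def post_request(query, limit=25):
    """POST /search variant: filter carried in a JSON body; suits complex filters."""
    return BASE + "/search", {"filter": {"query": query}, "page": {"limit": limit}}

print(get_url("status:high"))
```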

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
