# get_rule_alert_metrics
Analyze alert metrics by detection rule to identify trends and hotspots across all alert types, including system and detection errors, within a specified time period. Group data by custom intervals for detailed insights into security monitoring patterns.
## Instructions
Gets alert metrics grouped by detection rule for ALL alert types, including alerts, detection errors, and system errors within a given time period. Use this tool to identify hot spots in alerts and use `list_alerts` for specific alert details.
Returns a dict with:

- `alerts_per_rule`: List of series with `entityId`, `label`, and `value`
- `total_alerts`: Total number of alerts in the period
- `start_date`: Start date of the period
- `end_date`: End date of the period
- `interval_in_minutes`: Grouping interval for the metrics
- `rule_ids`: List of rule IDs if provided
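For illustration, a successful response might look like the following; the rule IDs and counts are hypothetical, not real output:

```python
# Hypothetical example of a successful get_rule_alert_metrics response.
example_response = {
    "success": True,
    "alerts_per_rule": [
        {"entityId": "AWS.CloudTrail.RootActivity", "label": "Root Activity", "value": 42},
        {"entityId": "Okta.Login.Failed", "label": "Failed Okta Login", "value": 17},
    ],
    "total_alerts": 2,
    "start_date": "2024-03-13T00:00:00Z",
    "end_date": "2024-03-20T00:00:00Z",
    "interval_in_minutes": 60,
    "rule_ids": None,  # None when no rule_ids filter was supplied
}
```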
Permissions: `{'all_of': ['Read Panther Metrics']}`
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| end_date | No | Optional end date in ISO-8601 format. If not provided, defaults to the end of the current day UTC. | |
| interval_in_minutes | No | Intervals for aggregating data points. Smaller intervals provide more granular detail of when events occurred, while larger intervals show broader trends but obscure the precise timing of incidents. | |
| rule_ids | No | A valid JSON list of Panther rule IDs to get metrics for | |
| start_date | No | Optional start date in ISO-8601 format. If not provided, defaults to the start of the current day UTC. | |
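For illustration, a client calling this tool might pass arguments shaped like the following; the values are hypothetical and the rule IDs are the ones shown in the schema examples:

```python
import json

# Hypothetical arguments for a get_rule_alert_metrics call.
arguments = {
    "start_date": "2024-03-20T00:00:00Z",
    "end_date": "2024-03-21T00:00:00Z",
    "interval_in_minutes": 60,
    # rule_ids is documented as "a valid JSON list", so a JSON-encoded
    # string is presumably accepted and parsed by the input validator.
    "rule_ids": json.dumps(["AppOmni.Alert.Passthrough", "Auth0.MFA.Policy.Disabled"]),
}
```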
## Implementation Reference
- Full implementation of the `get_rule_alert_metrics` tool handler. Includes the `@mcp_tool` decorator for automatic registration, an input schema with Pydantic `Field` and validators, and the core logic that queries Panther GraphQL for rule alert metrics, filters by `rule_ids` if provided, and returns formatted metrics data.

```python
@mcp_tool(
    annotations={
        "permissions": all_perms(Permission.SUMMARY_READ),
        "readOnlyHint": True,
    }
)
async def get_rule_alert_metrics(
    start_date: Annotated[
        str | None,
        Field(
            description="Optional start date in ISO-8601 format. If not provided, defaults to the start of the current day UTC.",
            examples=["2024-03-20T00:00:00Z"],
        ),
    ] = None,
    end_date: Annotated[
        str | None,
        Field(
            description="Optional end date in ISO-8601 format. If not provided, defaults to the end of the current day UTC.",
            examples=["2024-03-20T00:00:00Z"],
        ),
    ] = None,
    interval_in_minutes: Annotated[
        int,
        BeforeValidator(_validate_interval),
        Field(
            description="Intervals for aggregating data points. Smaller intervals provide more granular detail of when events occurred, while larger intervals show broader trends but obscure the precise timing of incidents.",
            examples=[15, 30, 60, 180, 360, 720, 1440],
        ),
    ] = 15,
    rule_ids: Annotated[
        list[str],
        BeforeValidator(_validate_rule_ids),
        Field(
            description="A valid JSON list of Panther rule IDs to get metrics for",
            examples=[["AppOmni.Alert.Passthrough", "Auth0.MFA.Policy.Disabled"]],
        ),
    ] = [],
) -> dict[str, Any]:
    """Gets alert metrics grouped by detection rule for ALL alert types, including
    alerts, detection errors, and system errors within a given time period. Use this
    tool to identify hot spots in alerts and use list_alerts for specific alert details.

    Returns:
        Dict:
            - alerts_per_rule: List of series with entityId, label, and value
            - total_alerts: Total number of alerts in the period
            - start_date: Start date of the period
            - end_date: End date of the period
            - interval_in_minutes: Grouping interval for the metrics
            - rule_ids: List of rule IDs if provided
    """
    try:
        # If start or end date is missing, use week's date range
        if not start_date or not end_date:
            default_start_date, default_end_date = _get_week_date_range()
            if not start_date:
                start_date = default_start_date
            if not end_date:
                end_date = default_end_date

        logger.info(f"Fetching alerts per rule metrics from {start_date} to {end_date}")

        # Prepare variables
        variables = {
            "input": {
                "fromDate": start_date,
                "toDate": end_date,
                "intervalInMinutes": interval_in_minutes,
            }
        }

        # Execute query
        result = await _execute_query(METRICS_ALERTS_PER_RULE_QUERY, variables)

        if not result or "metrics" not in result:
            logger.error(f"Could not find key 'metrics' in result: {result}")
            raise Exception("Failed to fetch metrics data")

        metrics_data = result["metrics"]

        # Filter by rule IDs if provided
        if rule_ids:
            alerts_per_rule = [
                item
                for item in metrics_data["alertsPerRule"]
                if item["entityId"] in rule_ids
            ]
        else:
            alerts_per_rule = metrics_data["alertsPerRule"]

        return {
            "success": True,
            "alerts_per_rule": alerts_per_rule,
            "total_alerts": len(alerts_per_rule),
            "start_date": start_date,
            "end_date": end_date,
            "interval_in_minutes": interval_in_minutes,
            "rule_ids": rule_ids if rule_ids else None,
        }
    except Exception as e:
        logger.error(f"Failed to fetch rule alert metrics: {str(e)}")
        return {
            "success": False,
            "message": f"Failed to fetch rule alert metrics: {str(e)}",
        }
```
- src/mcp_panther/server.py:75-79 (registration): Central registration point where all `@mcp_tool`-decorated functions, including `get_rule_alert_metrics`, are registered with the FastMCP server instance.

```python
register_all_tools(mcp)

# Register all prompts with MCP using the registry
register_all_prompts(mcp)

# Register all resources with MCP using the registry
register_all_resources(mcp)
```
- Imports for the GraphQL query (`METRICS_ALERTS_PER_RULE_QUERY`) used in the handler, and the validators used for input schema validation (the opening of the first `import` statement is truncated in the source):

```python
    METRICS_ALERTS_PER_RULE_QUERY,
    METRICS_ALERTS_PER_SEVERITY_QUERY,
    METRICS_BYTES_PROCESSED_QUERY,
)
from ..validators import (
    _validate_alert_types,
    _validate_interval,
    _validate_rule_ids,
    _validate_severities,
)
from .registry import mcp_tool
```