panther-labs

Panther MCP Server

Official

list_alerts

Retrieve and filter alerts from Panther MCP Server by date range, severity, status, detection ID, log sources, resource types, and more. Customize pagination and search alert titles for efficient monitoring and investigation.

Instructions

List alerts from Panther with comprehensive filtering options

Args:
- start_date: Optional start date in ISO 8601 format (e.g. "2024-03-20T00:00:00Z")
- end_date: Optional end date in ISO 8601 format (e.g. "2024-03-21T00:00:00Z")
- severities: Optional list of severities to filter by (e.g. ["CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"])
- statuses: Optional list of statuses to filter by (e.g. ["OPEN", "TRIAGED", "RESOLVED", "CLOSED"])
- cursor: Optional cursor for pagination from a previous query
- detection_id: Optional detection ID to filter alerts by. If provided, a date range is not required.
- event_count_max: Optional maximum number of events that returned alerts must have
- event_count_min: Optional minimum number of events that returned alerts must have
- log_sources: Optional list of log source IDs to filter alerts by
- log_types: Optional list of log type names to filter alerts by
- name_contains: Optional string to search for in alert titles
- page_size: Number of results per page (default: 25, maximum: 50)
- resource_types: Optional list of AWS resource type names to filter alerts by
- subtypes: Optional list of alert subtypes. Valid values depend on alert_type:
  - When alert_type="ALERT": ["POLICY", "RULE", "SCHEDULED_RULE"]
  - When alert_type="DETECTION_ERROR": ["RULE_ERROR", "SCHEDULED_RULE_ERROR"]
  - When alert_type="SYSTEM_ERROR": subtypes are not allowed
- alert_type: Type of alerts to return (default: "ALERT"). One of:
  - "ALERT": Regular detection alerts
  - "DETECTION_ERROR": Alerts from detection errors
  - "SYSTEM_ERROR": System error alerts
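As an illustration, a filtered query for recent high-severity open alerts might use an argument payload like the following (the values are hypothetical; the sanity checks mirror the documented constraints):

```python
# Hypothetical argument payload for the list_alerts tool.
args = {
    "start_date": "2024-03-20T00:00:00Z",
    "end_date": "2024-03-21T00:00:00Z",
    "severities": ["CRITICAL", "HIGH"],
    "statuses": ["OPEN", "TRIAGED"],
    "page_size": 25,
    "alert_type": "ALERT",
}

# Basic sanity checks matching the documented constraints.
assert 1 <= args["page_size"] <= 50
assert set(args["severities"]) <= {"CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"}
assert set(args["statuses"]) <= {"OPEN", "TRIAGED", "RESOLVED", "CLOSED"}
print("payload ok")
```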

Permissions: {'all_of': ['Read Alerts']}

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| alert_type | No | Type of alerts to return | ALERT |
| cursor | No | Cursor for pagination returned from a previous call | |
| detection_id | No | Detection ID to filter alerts by; if provided, the date range is not required | |
| end_date | No | End date in ISO-8601 format; if not provided, defaults to the end of the current day UTC | |
| event_count_max | No | Maximum number of events an alert may contain | |
| event_count_min | No | Minimum number of events an alert must contain | |
| log_sources | No | List of log-source IDs to filter alerts by | |
| log_types | No | List of log-type names to filter alerts by | |
| name_contains | No | Substring to match within alert titles | |
| page_size | No | Number of results per page (maximum 50) | 25 |
| resource_types | No | List of AWS resource-type names to filter alerts by | |
| severities | No | List of severities to filter by | |
| start_date | No | Start date in ISO-8601 format; if not provided, defaults to the start of the current day UTC | |
| statuses | No | List of statuses to filter by | |
| subtypes | No | List of alert subtypes (valid values depend on alert_type) | |
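The alert_type/subtypes rules can be expressed as a small standalone validator. This is a sketch mirroring the documented rules, not the server's actual code (`check_subtypes` and `VALID_SUBTYPES` are illustrative names):

```python
# Standalone sketch of the documented alert_type/subtypes rules.
VALID_SUBTYPES = {
    "ALERT": {"POLICY", "RULE", "SCHEDULED_RULE"},
    "DETECTION_ERROR": {"RULE_ERROR", "SCHEDULED_RULE_ERROR"},
    "SYSTEM_ERROR": set(),  # subtypes are not allowed
}

def check_subtypes(alert_type: str, subtypes: list[str]) -> None:
    """Raise ValueError if the subtypes are invalid for the given alert_type."""
    if alert_type not in VALID_SUBTYPES:
        raise ValueError(f"alert_type must be one of {sorted(VALID_SUBTYPES)}")
    if not subtypes:
        return
    if alert_type == "SYSTEM_ERROR":
        raise ValueError("subtypes are not allowed when alert_type is SYSTEM_ERROR")
    invalid = [s for s in subtypes if s not in VALID_SUBTYPES[alert_type]]
    if invalid:
        raise ValueError(f"Invalid subtypes {invalid} for alert_type={alert_type}")
```

For example, `check_subtypes("ALERT", ["RULE", "POLICY"])` passes, while any subtype combined with `alert_type="SYSTEM_ERROR"` raises.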

Implementation Reference

  • The core handler for the 'list_alerts' tool, decorated with @mcp_tool for automatic registration. It implements comprehensive filtering, pagination, and input validation (via Pydantic Annotated fields and custom validators), queries Panther's REST API /alerts endpoint, and returns formatted, paginated results with a success status.
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.ALERT_READ),
            "readOnlyHint": True,
        }
    )
    async def list_alerts(
        start_date: Annotated[
            str | None,
            BeforeValidator(_validate_iso_date),
            Field(
                description="Optional start date in ISO-8601 format. If not provided, defaults to the start of the current day UTC.",
                examples=["2024-03-20T00:00:00Z"],
            ),
        ] = None,
        end_date: Annotated[
            str | None,
            BeforeValidator(_validate_iso_date),
            Field(
                description="Optional end date in ISO-8601 format. If not provided, defaults to the end of the current day UTC.",
                examples=["2024-03-20T00:00:00Z"],
            ),
        ] = None,
        severities: Annotated[
            list[str],
            BeforeValidator(_validate_severities),
            Field(
                description="Optional list of severities to filter by",
                examples=[["CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"]],
            ),
        ] = [],
        statuses: Annotated[
            list[str],
            BeforeValidator(_validate_statuses),
            Field(
                description="Optional list of statuses to filter by",
                examples=[
                    ["OPEN", "TRIAGED", "RESOLVED", "CLOSED"],
                    ["RESOLVED", "CLOSED"],
                    ["OPEN", "TRIAGED"],
                ],
            ),
        ] = [],
        cursor: Annotated[
            str | None,
            Field(
                min_length=1,
                description="Optional cursor for pagination returned from a previous call",
            ),
        ] = None,
        detection_id: Annotated[
            str | None,
            Field(
                min_length=1,
                description="Optional detection ID to filter alerts by; if not provided, the default date range (7 days) is applied",
            ),
        ] = None,
        event_count_max: Annotated[
            int | None,
            Field(ge=1, description="Optional maximum number of events an alert may contain"),
        ] = None,
        event_count_min: Annotated[
            int,
            Field(ge=1, description="Optional minimum number of events an alert must contain"),
        ] = 1,
        log_sources: Annotated[
            list[str],
            Field(description="Optional list of log-source IDs to filter alerts by"),
        ] = [],
        log_types: Annotated[
            list[str],
            Field(description="Optional list of log-type names to filter alerts by"),
        ] = [],
        name_contains: Annotated[
            str | None,
            Field(min_length=1, description="Optional substring to match within alert titles"),
        ] = None,
        page_size: Annotated[
            int,
            Field(
                description="Number of results per page (max 50, default 25)",
                ge=1,
                le=50,
            ),
        ] = 25,
        resource_types: Annotated[
            list[str],
            Field(description="Optional list of AWS resource-type names to filter alerts by"),
        ] = [],
        subtypes: Annotated[
            list[str],
            BeforeValidator(_validate_subtypes),
            Field(
                description="Optional list of alert subtypes (valid values depend on alert_type)",
                examples=[
                    ["RULE"],  # Python rules only
                    ["SCHEDULED_RULE"],  # Scheduled queries only
                    ["POLICY"],  # Cloud policies only
                    ["RULE", "SCHEDULED_RULE"],  # Both rule types (when alert_type=ALERT)
                    ["RULE_ERROR", "SCHEDULED_RULE_ERROR"],  # When alert_type=DETECTION_ERROR
                ],
            ),
        ] = [],
        alert_type: Annotated[
            str,
            BeforeValidator(_validate_alert_api_types),
            Field(
                description="Type of alerts to return",
                examples=["ALERT", "DETECTION_ERROR", "SYSTEM_ERROR"],
            ),
        ] = "ALERT",
    ) -> dict[str, Any]:
        """List alerts from Panther with comprehensive filtering options

        Args:
            start_date: Optional start date in ISO 8601 format (e.g. "2024-03-20T00:00:00Z")
            end_date: Optional end date in ISO 8601 format (e.g. "2024-03-21T00:00:00Z")
            severities: Optional list of severities to filter by (e.g. ["CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"])
            statuses: Optional list of statuses to filter by (e.g. ["OPEN", "TRIAGED", "RESOLVED", "CLOSED"])
            cursor: Optional cursor for pagination from a previous query
            detection_id: Optional detection ID to filter alerts by. If not provided, the default date range (7 days) is applied.
            event_count_max: Optional maximum number of events that returned alerts must have
            event_count_min: Optional minimum number of events that returned alerts must have
            log_sources: Optional list of log source IDs to filter alerts by
            log_types: Optional list of log type names to filter alerts by
            name_contains: Optional string to search for in alert titles
            page_size: Number of results per page (default: 25, maximum: 50)
            resource_types: Optional list of AWS resource type names to filter alerts by
            subtypes: Optional list of alert subtypes. Valid values depend on alert_type:
                - When alert_type="ALERT": ["POLICY", "RULE", "SCHEDULED_RULE"]
                - When alert_type="DETECTION_ERROR": ["RULE_ERROR", "SCHEDULED_RULE_ERROR"]
                - When alert_type="SYSTEM_ERROR": subtypes are not allowed
            alert_type: Type of alerts to return (default: "ALERT"). One of:
                - "ALERT": Regular detection alerts
                - "DETECTION_ERROR": Alerts from detection errors
                - "SYSTEM_ERROR": System error alerts
        """
        logger.info("Fetching alerts from Panther")

        try:
            # Validate page size
            if page_size < 1:
                raise ValueError("page_size must be greater than 0")
            if page_size > 50:
                logger.warning(
                    f"page_size {page_size} exceeds maximum of 50, using 50 instead"
                )
                page_size = 50

            # Validate alert_type and subtypes combination
            valid_alert_types = ["ALERT", "DETECTION_ERROR", "SYSTEM_ERROR"]
            if alert_type not in valid_alert_types:
                raise ValueError(f"alert_type must be one of {valid_alert_types}")

            if subtypes:
                valid_subtypes = {
                    "ALERT": ["POLICY", "RULE", "SCHEDULED_RULE"],
                    "DETECTION_ERROR": ["RULE_ERROR", "SCHEDULED_RULE_ERROR"],
                    "SYSTEM_ERROR": [],
                }
                if alert_type == "SYSTEM_ERROR":
                    raise ValueError(
                        "subtypes are not allowed when alert_type is SYSTEM_ERROR"
                    )
                allowed_subtypes = valid_subtypes[alert_type]
                invalid_subtypes = [st for st in subtypes if st not in allowed_subtypes]
                if invalid_subtypes:
                    raise ValueError(
                        f"Invalid subtypes {invalid_subtypes} for alert_type={alert_type}. "
                        f"Valid subtypes are: {allowed_subtypes}"
                    )

            # Prepare query parameters
            params = {
                "type": alert_type,
                "limit": page_size,
                "sort-dir": "desc",
            }

            # Handle the required filter: either detection-id OR date range
            if detection_id:
                params["detection-id"] = detection_id
                logger.info(f"Filtering by detection ID: {detection_id}")

            # Add a default date filter (7 days) if no detection_id
            if not detection_id and not (start_date or end_date):
                start_date, end_date = _get_week_date_range()

            if start_date:
                params["created-after"] = start_date
            if end_date:
                params["created-before"] = end_date

            # Add optional filters
            if cursor:
                if not isinstance(cursor, str):
                    raise ValueError(
                        "Cursor must be a string value from previous response's next"
                    )
                params["cursor"] = cursor
                logger.info(f"Using cursor for pagination: {cursor}")
            if severities:
                params["severity"] = severities
                logger.info(f"Filtering by severities: {severities}")
            if statuses:
                params["status"] = statuses
                logger.info(f"Filtering by statuses: {statuses}")
            if event_count_max is not None:
                params["event-count-max"] = event_count_max
                logger.info(f"Filtering by max event count: {event_count_max}")
            if event_count_min is not None:
                params["event-count-min"] = event_count_min
                logger.info(f"Filtering by min event count: {event_count_min}")
            if log_sources:
                params["log-source"] = log_sources
                logger.info(f"Filtering by log sources: {log_sources}")
            if log_types:
                params["log-type"] = log_types
                logger.info(f"Filtering by log types: {log_types}")
            if name_contains:
                params["name-contains"] = name_contains
                logger.info(f"Filtering by name contains: {name_contains}")
            if resource_types:
                params["resource-type"] = resource_types
                logger.info(f"Filtering by resource types: {resource_types}")
            if subtypes:
                params["sub-type"] = subtypes
                logger.info(f"Filtering by subtypes: {subtypes}")

            logger.debug(f"Query parameters: {params}")

            # Execute the REST API call
            async with get_rest_client() as client:
                result, status = await client.get(
                    "/alerts", params=params, expected_codes=[200, 400]
                )
                if status == 400:
                    logger.error("Bad request when fetching alerts")
                    return {
                        "success": False,
                        "message": "Bad request when fetching alerts",
                    }

            # Log the raw result for debugging
            logger.debug(f"Raw API result: {result}")

            # Process results
            alerts = result.get("results", [])
            next_cursor = result.get("next")

            logger.info(f"Successfully retrieved {len(alerts)} alerts")

            # Format the response
            return {
                "success": True,
                "alerts": alerts,
                "total_alerts": len(alerts),
                "has_next_page": next_cursor is not None,
                "has_previous_page": cursor is not None,
                "end_cursor": next_cursor,
                "start_cursor": cursor,
            }
        except Exception as e:
            logger.error(f"Failed to fetch alerts: {str(e)}")
            return {"success": False, "message": f"Failed to fetch alerts: {str(e)}"}
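Callers page through results by feeding the returned end_cursor back as cursor until it comes back as None. A minimal sketch of that loop, using a hypothetical fetch_page stand-in (with canned pages) in place of the real tool call:

```python
import asyncio

async def fetch_page(cursor=None):
    # Stand-in for a list_alerts call; returns the same response shape
    # with two canned pages for illustration.
    pages = {
        None: {"success": True, "alerts": ["a1", "a2"], "end_cursor": "c1"},
        "c1": {"success": True, "alerts": ["a3"], "end_cursor": None},
    }
    return pages[cursor]

async def fetch_all_alerts():
    """Collect alerts across pages by following end_cursor."""
    alerts, cursor = [], None
    while True:
        page = await fetch_page(cursor=cursor)
        if not page["success"]:
            break
        alerts.extend(page["alerts"])
        cursor = page["end_cursor"]
        if cursor is None:  # no further pages
            break
    return alerts

print(asyncio.run(fetch_all_alerts()))  # → ['a1', 'a2', 'a3']
```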
  • The `register_all_tools` function that scans for all @mcp_tool decorated functions (including list_alerts) and registers them with the MCP server instance using metadata from the decorator.
    def register_all_tools(mcp_instance) -> None:
        """
        Register all tools marked with @mcp_tool with the given MCP instance.

        Args:
            mcp_instance: The FastMCP instance to register tools with
        """
        logger.info(f"Registering {len(_tool_registry)} tools with MCP")

        # Sort tools by name
        sorted_funcs = sorted(_tool_registry, key=lambda f: f.__name__)

        for tool in sorted_funcs:
            logger.debug(f"Registering tool: {tool.__name__}")

            # Get tool metadata if it exists
            metadata = getattr(tool, "_mcp_tool_metadata", {})
            annotations = metadata.get("annotations", {})

            # Create tool decorator with metadata
            tool_decorator = mcp_instance.tool(
                name=metadata.get("name"),
                description=metadata.get("description"),
                annotations=annotations,
            )

            if annotations and annotations.get("permissions"):
                if not tool.__doc__:
                    tool.__doc__ = ""
                tool.__doc__ += f"\n\n Permissions:{annotations.get('permissions')}"

            # Register the tool
            tool_decorator(tool)

        logger.info("All tools registered successfully")
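The registry pattern can be illustrated with a self-contained miniature (the names mcp_tool_mini, MiniMCP, and register_all are illustrative stand-ins, not the real FastMCP API):

```python
_tool_registry = set()

def mcp_tool_mini(func):
    """Minimal stand-in for @mcp_tool: record the function for later registration."""
    _tool_registry.add(func)
    return func

class MiniMCP:
    """Minimal stand-in for a FastMCP server instance."""
    def __init__(self):
        self.tools = {}

    def register(self, func):
        self.tools[func.__name__] = func

def register_all(mcp):
    # Deterministic order, matching the sorted-by-name behavior above.
    for tool in sorted(_tool_registry, key=lambda f: f.__name__):
        mcp.register(tool)

@mcp_tool_mini
def list_widgets():
    return []

mcp = MiniMCP()
register_all(mcp)
print(sorted(mcp.tools))  # → ['list_widgets']
```

Decorating at import time and registering in one pass keeps tool discovery centralized, so adding a new tool requires no changes to server setup code.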
  • The `@mcp_tool` decorator definition used on list_alerts to mark it for auto-registration and attach metadata like permissions.
    def mcp_tool(
        func: Optional[Callable] = None,
        *,
        name: Optional[str] = None,
        description: Optional[str] = None,
        annotations: Optional[Dict[str, Any]] = None,
    ) -> Callable:
        """
        Decorator to mark a function as an MCP tool.

        Functions decorated with this will be automatically registered
        when register_all_tools() is called.

        Can be used in two ways:

        1. Direct decoration:
            @mcp_tool
            def my_tool(): ...

        2. With parameters:
            @mcp_tool(
                name="custom_name",
                description="Custom description",
                annotations={"category": "data_analysis"}
            )
            def my_tool(): ...

        Args:
            func: The function to decorate
            name: Optional custom name for the tool. If not provided, uses the function name.
            description: Optional description of what the tool does. If not provided, uses the function's docstring.
            annotations: Optional dictionary of additional annotations for the tool.
        """

        def decorator(func: Callable) -> Callable:
            # Store metadata on the function
            func._mcp_tool_metadata = {
                "name": name,
                "description": description,
                "annotations": annotations,
            }
            _tool_registry.add(func)

            @wraps(func)
            def wrapper(*args, **kwargs):
                return func(*args, **kwargs)

            return wrapper

        # Handle both @mcp_tool and @mcp_tool(...) cases
        if func is None:
            return decorator
        return decorator(func)
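The dual calling convention (bare @mcp_tool vs. parameterized @mcp_tool(...)) hinges on whether func is passed on the first call. A self-contained miniature of the same pattern (tagged is an illustrative name):

```python
from functools import wraps
from typing import Callable, Optional

def tagged(func: Optional[Callable] = None, *, name: Optional[str] = None) -> Callable:
    """Miniature of the @mcp_tool pattern: usable with or without arguments."""
    def decorator(f: Callable) -> Callable:
        # Attach metadata before wrapping; @wraps copies f.__dict__ onto wrapper.
        f._meta = {"name": name or f.__name__}

        @wraps(f)
        def wrapper(*args, **kwargs):
            return f(*args, **kwargs)

        return wrapper

    # Bare @tagged passes func directly; @tagged(...) returns the decorator.
    return decorator if func is None else decorator(func)

@tagged
def alpha():
    return 1

@tagged(name="custom")
def beta():
    return 2

print(alpha._meta["name"], beta._meta["name"])  # → alpha custom
```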

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/panther-labs/mcp-panther'

If you have feedback or need assistance with the MCP directory API, please join our Discord server