
Panther MCP Server (Official)

start_ai_alert_triage (Read-only)

Initiate AI-powered analysis of security alerts to assess risk, analyze events, and provide investigation recommendations.

Instructions

Start an AI-powered triage analysis for a Panther alert with intelligent insights and recommendations.

This tool initiates Panther's embedded AI agent to triage an alert and provide an intelligent report about the events, risk level, potential impact, and recommended next steps for investigation.

The AI triage includes analysis of:

  • Alert metadata (severity, detection rule, timestamps)

  • Related events and logs (if available)

  • Comments from previous investigations

  • Contextual security analysis and recommendations

Returns: Dict containing:

  • success: Boolean indicating if triage was generated successfully

  • summary: The AI-generated triage summary text

  • stream_id: The stream ID used for this analysis

  • metadata: Information about the analysis request

  • message: Error message if unsuccessful
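A successful response from this tool might look like the following sketch. All values here are illustrative placeholders, not real Panther data:

```python
# Hypothetical example of a successful start_ai_alert_triage response;
# field names match the Returns contract, values are made up.
response = {
    "success": True,
    "summary": "The alert indicates repeated failed console logins ...",
    "stream_id": "example-stream-id",
    "metadata": {
        "alert_id": "example-alert-id",
        "output_length": "medium",
        "generation_time_seconds": 12.4,
        "prompt_included": False,
    },
}
```

On failure, `success` is `False` and a `message` field carries the error instead of `summary`.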

Permissions: {'all_of': ['Run Panther AI']}

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| alert_id | Yes | The ID of the alert to start AI triage for | — |
| prompt | No | Optional additional prompt to provide context for the AI triage | None |
| timeout_seconds | No | Timeout in seconds to wait for AI triage completion (120–300) | 180 |
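Arguments satisfying this schema could look like the sketch below. The alert ID and prompt are hypothetical examples:

```python
# Hypothetical argument set for start_ai_alert_triage;
# the alert ID is made up, not a real Panther alert.
args = {
    "alert_id": "example-alert-123",
    "prompt": "Focus on whether the source IP appears in threat intel feeds",
    "timeout_seconds": 240,
}

# Constraints enforced by the schema: alert_id must be non-empty,
# and timeout_seconds must fall between 120 and 300 inclusive.
assert len(args["alert_id"]) >= 1
assert 120 <= args["timeout_seconds"] <= 300
```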

Output Schema

No structured output schema is defined; the response shape is described under Returns above.

Implementation Reference

  • The @mcp_tool decorator that registers the 'start_ai_alert_triage' tool, specifying permissions (RUN_PANTHER_AI) and readOnlyHint.
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.RUN_PANTHER_AI),
            "readOnlyHint": True,
        }
    )
  • Input schema defined via Annotated Pydantic fields: alert_id (str, required), prompt (str | None, optional), timeout_seconds (int, default=180, ge=120 le=300).
    async def start_ai_alert_triage(
        alert_id: Annotated[
            str,
            Field(min_length=1, description="The ID of the alert to start AI triage for"),
        ],
        prompt: Annotated[
            str | None,
            Field(
                min_length=1,
                description="Optional additional prompt to provide context for the AI triage",
            ),
        ] = None,
        timeout_seconds: Annotated[
            int,
            Field(
                description="Timeout in seconds to wait for AI triage completion",
                ge=120,
                le=300,
            ),
        ] = 180,
    ) -> dict[str, Any]:
  • The core handler logic: constructs GraphQL variables for AI_SUMMARIZE_ALERT_MUTATION to start triage stream, polls AI_INFERENCE_STREAM_QUERY until finished or timeout, accumulates responseText, handles errors and returns summary with metadata.
    """Start an AI-powered triage analysis for a Panther alert with intelligent insights and recommendations.
    
    This tool initiates Panther's embedded AI agent to triage an alert and provide
    an intelligent report about the events, risk level, potential impact, and
    recommended next steps for investigation.
    
    The AI triage includes analysis of:
    - Alert metadata (severity, detection rule, timestamps)
    - Related events and logs (if available)
    - Comments from previous investigations
    - Contextual security analysis and recommendations
    
    Returns:
        Dict containing:
        - success: Boolean indicating if triage was generated successfully
        - summary: The AI-generated triage summary text
        - stream_id: The stream ID used for this analysis
        - metadata: Information about the analysis request
        - message: Error message if unsuccessful
    """
    logger.info(f"Starting AI triage for alert {alert_id}")
    
    try:
        # Set output length to medium (fixed, not configurable by AI)
        output_length = "medium"
    
        # Prepare the AI summarize request with minimal required fields
        request_input = {
            "alertId": alert_id,
            "outputLength": output_length,
            "metadata": {
                "kind": "alert"  # Required: tells AI this is an alert analysis
            },
        }
    
        # Add optional prompt if provided
        if prompt:
            request_input["prompt"] = prompt
    
        variables = {"input": request_input}
    
        if prompt:
            logger.info(f"Using additional prompt: {prompt[:100]}...")
    
        logger.info(f"Initiating AI triage with output_length={output_length}")
    
        # Step 1: Start the AI triage
        result = await _execute_query(AI_SUMMARIZE_ALERT_MUTATION, variables)
    
        if not result or "aiSummarizeAlert" not in result:
            logger.error("Failed to initiate AI triage")
            return {
                "success": False,
                "message": "Failed to initiate AI triage",
            }
    
        stream_id = result["aiSummarizeAlert"]["streamId"]
        logger.info(f"AI triage started with stream ID: {stream_id}")
    
        # Step 2: Poll for results with timeout
        start_time = asyncio.get_event_loop().time()
        poll_interval = 2.0  # Start with 2 second intervals
        max_poll_interval = 10.0  # Maximum 10 second intervals
    
        accumulated_response = ""
    
        while True:
            current_time = asyncio.get_event_loop().time()
            elapsed = current_time - start_time
    
            if elapsed > timeout_seconds:
                logger.warning(f"AI triage timed out after {timeout_seconds} seconds")
                return {
                    "success": False,
                    "message": f"AI triage generation timed out after {timeout_seconds} seconds",
                    "stream_id": stream_id,
                    "partial_summary": accumulated_response
                    if accumulated_response
                    else None,
                }
    
            # Poll the inference stream
            stream_variables = {"streamId": stream_id}
            stream_result = await _execute_query(
                AI_INFERENCE_STREAM_QUERY, stream_variables
            )
    
            if not stream_result or "aiInferenceStream" not in stream_result:
                logger.error("Failed to poll AI inference stream")
                await asyncio.sleep(poll_interval)
                continue
    
            inference_data = stream_result["aiInferenceStream"]
    
            # Check for errors
            if inference_data.get("error"):
                logger.error(f"AI inference error: {inference_data['error']}")
                return {
                    "success": False,
                    "message": f"AI inference failed: {inference_data['error']}",
                    "stream_id": stream_id,
                }
    
            # Get the latest response text (AI streams the full response each time)
            response_text = inference_data.get("responseText", "")
            if response_text:
                accumulated_response = response_text
    
            # Check if finished
            if inference_data.get("finished", False):
                logger.info(f"AI triage completed after {elapsed:.1f} seconds")
                break
    
            # Wait before next poll, with exponential backoff
            await asyncio.sleep(poll_interval)
            poll_interval = min(poll_interval * 1.2, max_poll_interval)
    
        # Return the completed triage
        return {
            "success": True,
            "summary": accumulated_response,
            "stream_id": stream_id,
            "metadata": {
                "alert_id": alert_id,
                "output_length": output_length,
                "generation_time_seconds": round(elapsed, 1),
                "prompt_included": prompt is not None,
            },
        }
    
    except Exception as e:
        logger.error(f"Failed to start AI alert triage: {str(e)}")
        return {
            "success": False,
            "message": f"Failed to start AI alert triage: {str(e)}",
        }
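The polling loop above grows its sleep interval by 20% per iteration, capped at 10 seconds. A standalone sketch of that backoff schedule (same constants as the handler):

```python
# Reproduces the backoff arithmetic from the polling loop above:
# start at 2.0 s, multiply by 1.2 each poll, cap at 10.0 s.
def backoff_schedule(polls: int, start: float = 2.0,
                     factor: float = 1.2, cap: float = 10.0) -> list[float]:
    intervals = []
    interval = start
    for _ in range(polls):
        intervals.append(round(interval, 2))
        interval = min(interval * factor, cap)
    return intervals

print(backoff_schedule(5))  # → [2.0, 2.4, 2.88, 3.46, 4.15]
```

With these constants the interval reaches the 10-second cap after roughly nine polls, so a 180-second default timeout allows on the order of twenty polls.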

Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating this is a safe operation. The description adds behavioral context beyond annotations by detailing what the AI triage includes (e.g., analysis of alert metadata, related events, comments) and specifying a timeout parameter with default/max values. However, it does not mention rate limits, authentication needs, or potential side effects like resource consumption, which would enhance transparency further.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded with the core purpose, followed by details on analysis components and return values. It avoids unnecessary fluff, but the 'Returns' section could be more concise by referencing the output schema instead of listing fields. Overall, most sentences earn their place, though slight trimming is possible.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (AI triage initiation), the description is complete: it explains the purpose, analysis scope, return structure, and permissions. With annotations (readOnlyHint), a rich input schema (100% coverage), and an output schema (implied by the Returns section), no critical gaps remain. The description effectively complements the structured data without redundancy.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters (alert_id, prompt, timeout_seconds) thoroughly. The description does not add significant meaning beyond the schema, such as explaining how the prompt influences AI behavior or typical timeout scenarios. With high schema coverage, the baseline score of 3 is appropriate as the description provides minimal extra parameter insight.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Start an AI-powered triage analysis for a Panther alert with intelligent insights and recommendations.' It specifies the action ('start'), resource ('Panther alert'), and scope ('AI-powered triage analysis'), distinguishing it from sibling tools like get_ai_alert_triage_summary (which retrieves results) or get_alert (which fetches basic alert data).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by mentioning it initiates AI triage for alerts, but it does not explicitly state when to use this tool versus alternatives like get_ai_alert_triage_summary (which likely retrieves existing triage results). The permissions field ('Run Panther AI') provides some guidance on prerequisites, but no explicit when-not-to-use or alternative tool references are included.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
