
Panther MCP Server

Official

get_ai_alert_triage_summary

Read-only

Retrieve AI-generated triage analysis for security alerts to understand incident context and prioritize response actions.

Instructions

Retrieve the latest AI triage summary for a specific Panther alert.

This tool retrieves the most recently generated AI triage analysis for an alert. It fetches the list of AI inference stream IDs associated with the alert, then retrieves the response text for the latest stream.
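As a hypothetical illustration (the alert ID and request ID are placeholders, not real values), an MCP client would invoke this tool with a standard `tools/call` request:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "get_ai_alert_triage_summary",
    "arguments": {
      "alert_id": "example-alert-id"
    }
  }
}
```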

Returns: Dict containing:
- success: Boolean indicating if retrieval was successful
- summary: The latest AI triage summary containing:
  - stream_id: The unique stream identifier
  - response_text: The AI-generated triage summary
  - finished: Whether the triage generation completed
  - error: Any error message if present
- message: Error message if unsuccessful

Permissions: {'all_of': ['Run Panther AI']}
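On success, the returned dict is shaped like the following (all values here are illustrative placeholders, not real output):

```json
{
  "success": true,
  "summary": {
    "stream_id": "stream-123",
    "response_text": "The alert indicates ...",
    "finished": true,
    "error": null
  }
}
```

On failure, `success` is `false` and a `message` field describes the error instead.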

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| alert_id | Yes | The ID of the alert to retrieve the latest AI triage summary for | |

Output Schema

No output schema fields are defined for this tool.

Implementation Reference

  • The core handler function that implements the get_ai_alert_triage_summary tool. It fetches the latest AI-generated triage summary for a given alert ID by querying Panther's GraphQL API for AI inference streams metadata and then retrieving the response from the most recent stream.
    async def get_ai_alert_triage_summary(
        alert_id: Annotated[
            str,
            Field(
                min_length=1,
                description="The ID of the alert to retrieve the latest AI triage summary for",
            ),
        ],
    ) -> dict[str, Any]:
        """Retrieve the latest AI triage summary for a specific Panther alert.
    
        This tool retrieves the most recently generated AI triage analysis for an alert.
        It fetches the list of AI inference stream IDs associated with the alert,
        then retrieves the response text for the latest stream.
    
        Returns:
            Dict containing:
            - success: Boolean indicating if retrieval was successful
            - summary: The latest AI triage summary containing:
                - stream_id: The unique stream identifier
                - response_text: The AI-generated triage summary
                - finished: Whether the triage generation completed
                - error: Any error message if present
            - message: Error message if unsuccessful
        """
        logger.info(f"Retrieving latest AI triage summary for alert {alert_id}")
    
        try:
            # Step 1: Get all stream IDs for this alert
            metadata_variables = {"input": {"alias": alert_id}}
            metadata_result = await _execute_query(
                AI_INFERENCE_STREAMS_METADATA_QUERY, metadata_variables
            )
    
            if not metadata_result or "aiInferenceStreamsMetadata" not in metadata_result:
                logger.error("Failed to retrieve AI inference streams metadata")
                return {
                    "success": False,
                    "message": "Failed to retrieve AI inference streams metadata",
                }
    
            edges = metadata_result["aiInferenceStreamsMetadata"].get("edges", [])
            stream_ids = [edge["node"]["streamId"] for edge in edges]
    
            if not stream_ids:
                logger.info(f"No AI triage summary found for alert {alert_id}")
                return {
                    "success": False,
                    "message": "No AI triage summary found for this alert",
                }
    
            # Get the latest stream ID (last in the list)
            latest_stream_id = stream_ids[-1]
            logger.info(
            f"Found {len(stream_ids)} AI triage stream(s), retrieving latest: {latest_stream_id}"
            )
    
            # Step 2: Fetch response text for the latest stream ID
            stream_variables = {"streamId": latest_stream_id}
            stream_result = await _execute_query(
                AI_INFERENCE_STREAM_QUERY, stream_variables
            )
    
            if not stream_result or "aiInferenceStream" not in stream_result:
                logger.error(f"Failed to retrieve stream {latest_stream_id}")
                return {
                    "success": False,
                    "message": f"Failed to retrieve stream data for {latest_stream_id}",
                }
    
            inference_data = stream_result["aiInferenceStream"]
            response_text = inference_data.get("responseText", "")
            error = inference_data.get("error")
    
            summary = {
                "stream_id": latest_stream_id,
                "response_text": response_text,
                "finished": inference_data.get("finished", False),
                "error": error,
            }
    
            logger.info(
                f"Successfully retrieved latest AI triage summary for alert {alert_id}"
            )
    
            return {
                "success": True,
                "summary": summary,
            }
    
        except Exception as e:
            logger.error(f"Failed to retrieve AI alert triage summaries: {str(e)}")
            return {
                "success": False,
                "message": f"Failed to retrieve AI alert triage summaries: {str(e)}",
            }
  • The @mcp_tool decorator that registers the get_ai_alert_triage_summary function as an MCP tool, specifying required permissions and read-only nature.
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.RUN_PANTHER_AI),
            "readOnlyHint": True,
        }
    )
  • Pydantic schema definition for the tool's single input parameter: alert_id (string, required).
        alert_id: Annotated[
            str,
            Field(
                min_length=1,
                description="The ID of the alert to retrieve the latest AI triage summary for",
            ),
        ],
    ) -> dict[str, Any]:
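The stream-selection step in the handler above can be sketched in isolation. The following is a minimal, self-contained reproduction of the edge-parsing and latest-stream logic, assuming the same GraphQL response shape (`aiInferenceStreamsMetadata.edges[].node.streamId`); it is an illustration, not the shipped implementation:

```python
from typing import Any, Optional


def pick_latest_stream_id(metadata_result: dict[str, Any]) -> Optional[str]:
    """Return the most recent AI inference stream ID, or None if none exist.

    Mirrors the handler's approach: collect streamId values from the
    aiInferenceStreamsMetadata edges and take the last one in the list.
    """
    edges = metadata_result.get("aiInferenceStreamsMetadata", {}).get("edges", [])
    stream_ids = [edge["node"]["streamId"] for edge in edges]
    return stream_ids[-1] if stream_ids else None


# Example: two streams are present; the last one is treated as the latest.
result = {
    "aiInferenceStreamsMetadata": {
        "edges": [
            {"node": {"streamId": "stream-a"}},
            {"node": {"streamId": "stream-b"}},
        ]
    }
}
print(pick_latest_stream_id(result))  # stream-b
print(pick_latest_stream_id({"aiInferenceStreamsMetadata": {"edges": []}}))  # None
```

Note that this relies on the API returning streams in chronological order; the handler makes the same assumption when it indexes `stream_ids[-1]`.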
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating a safe read operation. The description adds valuable behavioral context beyond this: it explains the internal process (fetches stream IDs, retrieves latest response text), discloses permission requirements ('Run Panther AI'), and details the return structure. This enriches understanding without contradicting annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, with the core purpose stated first. However, the 'Returns:' section is somewhat redundant given the output schema, and the permission note could be integrated more smoothly, slightly reducing efficiency.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (involves multi-step retrieval), rich annotations (readOnlyHint), and the presence of an output schema, the description is complete. It covers the purpose, process, permissions, and return values, providing sufficient context for an agent to use it effectively without over-explaining structured data.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'alert_id' well-documented in the schema. The description adds no additional parameter semantics beyond what the schema provides (e.g., format examples or constraints), so it meets the baseline of 3 for high schema coverage without extra value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('retrieve'), resource ('latest AI triage summary'), and scope ('for a specific Panther alert'). It distinguishes this tool from siblings like 'get_alert' (which retrieves general alert details) and 'start_ai_alert_triage' (which initiates triage generation), making the purpose unambiguous and well-differentiated.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context by specifying it retrieves 'the most recently generated AI triage analysis for an alert,' suggesting it should be used after triage has been initiated. However, it doesn't explicitly state when NOT to use it (e.g., if no triage exists) or name alternatives like 'start_ai_alert_triage' for generating triage, leaving some guidance gaps.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/panther-labs/mcp-panther'

If you have feedback or need assistance with the MCP directory API, please join our Discord server