
Panther MCP Server

Official
Apache 2.0

Server Configuration

Describes the environment variables required to run the server.

No environment variables are required.

Prompts

Interactive templates invoked by user choice

  • prioritize-open-alerts: Performs detailed actor-based analysis and prioritization of alerts in the specified time period (YYYY-MM-DD HH:MM:SSZ format).
  • get-monthly-log-sources-report: Generates a monthly report on the health of all Panther log sources for a given month and year, and triages any unhealthy sources.
  • get-detection-rule-errors: Finds detection rule errors between the specified dates (YYYY-MM-DD HH:MM:SSZ format) and performs root cause analysis.
  • get-monthly-detection-quality-report: Generates a comprehensive detection quality report for a given month and year, analyzing alert data to identify problematic rules and opportunities for improvement, including alerts, detection errors, and system errors.
  • investigate-actor-activity: Performs an exhaustive investigation of a specific actor's activity, including both alerted and non-alerted events, and produces a comprehensive final report with a confidence assessment.

Resources

Contextual data attached and managed by the client

  • get_panther_config: Get the Panther configuration.

Tools

Functions exposed to the LLM to take actions

add_alert_comment

Add a comment to a Panther alert. Comments support Markdown formatting.

Returns: Dict containing:
  • success: Boolean indicating if the comment was added successfully
  • comment: Created comment information if successful
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Manage Alerts']}

disable_detection

Disable a Panther detection by setting enabled to false.

Permissions:{'any_of': ['Manage Rules', 'Manage Policies']}

get_alert

Get detailed information about a specific Panther alert by ID

Permissions:{'all_of': ['Read Alerts']}

get_alert_events

Get events for a specific Panther alert. Order of events is not guaranteed. This tool does not support pagination to prevent long-running, expensive queries.

Returns: Dict containing:
  • success: Boolean indicating if the request was successful
  • events: List of most recent events if successful
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Read Alerts']}

get_bytes_processed_per_log_type_and_source

Retrieves data ingestion metrics showing total bytes processed per log type and source, helping analyze data volume patterns.

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • bytes_processed: List of series with breakdown by log type and source
  • total_bytes: Total bytes processed in the period
  • start_date: Start date of the period
  • end_date: End date of the period
  • interval_in_minutes: Grouping interval for the metrics

Permissions:{'all_of': ['Read Panther Metrics']}

get_data_model

Get detailed information about a Panther data model, including the mappings and body

Returns complete data model information including Python body code and UDM mappings.

Permissions:{'all_of': ['View Rules']}

get_detection

Get detailed information about a Panther detection, including the detection body and tests.

Permissions:{'all_of': ['View Rules', 'View Policies']}

get_global_helper

Get detailed information about a Panther global helper by ID

Returns complete global helper information including Python body code and usage details.

Permissions:{'all_of': ['View Rules']}

get_http_log_source

Get detailed information about a specific HTTP log source by ID.

HTTP log sources are used to collect logs via HTTP endpoints/webhooks. This tool provides detailed configuration information for troubleshooting and monitoring HTTP log source integrations.

Args:
  • source_id: The ID of the HTTP log source to retrieve

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • source: HTTP log source information if successful, containing:
    • integrationId: The source ID
    • integrationLabel: The source name/label
    • logTypes: List of log types this source handles
    • logStreamType: Stream type (Auto, JSON, JsonArray, etc.)
    • logStreamTypeOptions: Additional stream type configuration
    • authMethod: Authentication method (None, Bearer, Basic, etc.)
    • authBearerToken: Bearer token if using Bearer auth
    • authUsername: Username if using Basic auth
    • authPassword: Password if using Basic auth
    • authHeaderKey: Header key for HMAC/SharedSecret auth
    • authSecretValue: Secret value for HMAC/SharedSecret auth
    • authHmacAlg: HMAC algorithm if using HMAC auth
  • message: Error message if unsuccessful

Permissions:{'all_of': ['View Log Sources']}

get_log_type_schema_details

Get detailed information for specific log type schemas, including their full specifications. Limited to 5 schemas at a time to prevent response size issues.

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • schemas: List of schemas, each containing:
    • name: Schema name (Log Type)
    • description: Schema description
    • spec: Schema specification in YAML/JSON format
    • version: Schema version number
    • revision: Schema revision number
    • isArchived: Whether the schema is archived
    • isManaged: Whether the schema is managed by a pack
    • isFieldDiscoveryEnabled: Whether automatic field discovery is enabled
    • referenceURL: Optional documentation URL
    • discoveredSpec: The schema's discovered spec
    • createdAt: Creation timestamp
    • updatedAt: Last update timestamp
  • message: Error message if unsuccessful

Permissions:{'all_of': ['View Rules']}

get_permissions

Get the current user's permissions. Use this to diagnose permission errors and determine if a new API token is needed.

get_role

Get detailed information about a Panther role by ID

Returns complete role information including all permissions and settings.

Permissions:{'all_of': ['Read User Info']}

get_rule_alert_metrics

Gets alert metrics grouped by detection rule for ALL alert types, including alerts, detection errors, and system errors within a given time period. Use this tool to identify hot spots in alerts and use list_alerts for specific alert details.

Returns: Dict containing:
  • alerts_per_rule: List of series with entityId, label, and value
  • total_alerts: Total number of alerts in the period
  • start_date: Start date of the period
  • end_date: End date of the period
  • interval_in_minutes: Grouping interval for the metrics
  • rule_ids: List of rule IDs if provided

Permissions:{'all_of': ['Read Panther Metrics']}

get_scheduled_query

Get detailed information about a specific scheduled query by ID.

Returns complete scheduled query information including SQL, schedule configuration, and metadata.

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • query: Scheduled query information if successful, containing:
    • id: Query ID
    • name: Query name
    • description: Query description
    • sql: The SQL query text
    • schedule: Schedule configuration (cron, rate, timeout)
    • managed: Whether the query is managed by Panther
    • createdAt: Creation timestamp
    • updatedAt: Last update timestamp
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Query Data Lake']}

get_severity_alert_metrics

Gets alert metrics grouped by severity for rule and policy alert types within a given time period. Use this tool to identify hot spots in your alerts, and use the list_alerts tool for specific details. Keep in mind that these metrics combine errors and alerts, so there may be inconsistencies from what list_alerts returns.

Returns: Dict containing:
  • alerts_per_severity: List of series with breakdown by severity
  • total_alerts: Total number of alerts in the period
  • start_date: Start date of the period
  • end_date: End date of the period
  • interval_in_minutes: Grouping interval for the metrics

Permissions:{'all_of': ['Read Panther Metrics']}

get_table_schema

Get column details for a specific data lake table.

IMPORTANT: This returns the table structure in Snowflake. For writing optimal queries, ALSO call get_panther_log_type_schema() to understand:

  • Nested object structures (only shown as 'object' type here)

  • Which fields map to p_any_* indicator columns

  • Array element structures

Example workflow:

  1. get_panther_log_type_schema(["AWS.CloudTrail"]) - understand structure

  2. get_table_schema("panther_logs.public", "aws_cloudtrail") - get column names/types

  3. Write query using both: nested paths from log schema, column names from table schema
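The three steps above might end in a query like this sketch. The specific table, column, and nested-path names here are assumptions for illustration; in practice they come from the two schema calls:

```python
# Hedged sketch of a query combining both schema sources. Column names
# (eventName) would come from get_table_schema; the nested path
# (userIdentity:arn) from the log type schema.
query = (
    "SELECT p_event_time, eventName, userIdentity:arn AS actor_arn "
    "FROM panther_logs.public.aws_cloudtrail "
    "WHERE p_event_time >= DATEADD(day, -1, CURRENT_TIMESTAMP()) "
    "AND eventName = 'ConsoleLogin'"
)
```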

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • name: Table name
  • display_name: Table display name
  • description: Table description
  • log_type: Log type
  • columns: List of columns, each containing:
    • name: Column name
    • type: Column data type
    • description: Column description
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Query Data Lake']}

get_user

Get detailed information about a Panther user by ID

Returns complete user information including email, names, role, authentication status, and timestamps.

Permissions:{'all_of': ['Read User Info']}

list_alert_comments

Get all comments for a specific Panther alert.

Returns: Dict containing:
  • success: Boolean indicating if the request was successful
  • comments: List of comments if successful, each containing:
    • id: The comment ID
    • body: The comment text
    • createdAt: Timestamp when the comment was created
    • createdBy: Information about the user who created the comment
    • format: The format of the comment (HTML, PLAIN_TEXT, or JSON_SCHEMA)
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Read Alerts']}

list_alerts

List alerts from Panther with comprehensive filtering options

Args:
  • start_date: Optional start date in ISO 8601 format (e.g. "2024-03-20T00:00:00Z")
  • end_date: Optional end date in ISO 8601 format (e.g. "2024-03-21T00:00:00Z")
  • severities: Optional list of severities to filter by (e.g. ["CRITICAL", "HIGH", "MEDIUM", "LOW", "INFO"])
  • statuses: Optional list of statuses to filter by (e.g. ["OPEN", "TRIAGED", "RESOLVED", "CLOSED"])
  • cursor: Optional cursor for pagination from a previous query
  • detection_id: Optional detection ID to filter alerts by. If provided, a date range is not required.
  • event_count_max: Optional maximum number of events that returned alerts must have
  • event_count_min: Optional minimum number of events that returned alerts must have
  • log_sources: Optional list of log source IDs to filter alerts by
  • log_types: Optional list of log type names to filter alerts by
  • name_contains: Optional string to search for in alert titles
  • page_size: Number of results per page (default: 25, maximum: 50)
  • resource_types: Optional list of AWS resource type names to filter alerts by
  • subtypes: Optional list of alert subtypes. Valid values depend on alert_type:
    • When alert_type="ALERT": ["POLICY", "RULE", "SCHEDULED_RULE"]
    • When alert_type="DETECTION_ERROR": ["RULE_ERROR", "SCHEDULED_RULE_ERROR"]
    • When alert_type="SYSTEM_ERROR": subtypes are not allowed
  • alert_type: Type of alerts to return (default: "ALERT"). One of:
    • "ALERT": Regular detection alerts
    • "DETECTION_ERROR": Alerts from detection errors
    • "SYSTEM_ERROR": System error alerts

Permissions:{'all_of': ['Read Alerts']}
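As a sketch, a typical argument set for list_alerts might look like the following. All values are illustrative; note that subtypes must be valid for the chosen alert_type:

```python
# Illustrative list_alerts arguments. "RULE" is a valid subtype when
# alert_type="ALERT"; subtypes are not allowed for "SYSTEM_ERROR".
params = {
    "start_date": "2024-03-20T00:00:00Z",
    "end_date": "2024-03-21T00:00:00Z",
    "severities": ["CRITICAL", "HIGH"],
    "statuses": ["OPEN", "TRIAGED"],
    "alert_type": "ALERT",
    "subtypes": ["RULE"],
    "page_size": 25,  # must not exceed 50
}
```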

list_data_models

List all data models from your Panther instance. Data models are used only in Panther's Python rules to map log type schema fields to a unified data model. They may also contain custom mappings for fields that are not part of the log type schema.

Returns paginated list of data models with metadata including mappings and log types.

Permissions:{'all_of': ['View Rules']}

list_database_tables

List all available tables in a Panther Database.

Required: Only use valid database names obtained from list_databases

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • tables: List of tables, each containing:
    • name: Table name
    • description: Table description
    • log_type: Log type
    • database: Database name
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Query Data Lake']}

list_databases

List all available datalake databases in Panther.

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • databases: List of databases, each containing:
    • name: Database name
    • description: Database description
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Query Data Lake']}

list_detections

List detections from your Panther instance with support for multiple detection types and filtering.

Permissions:{'all_of': ['View Rules', 'View Policies']}

list_global_helpers

List all global helpers from your Panther instance. Global helpers are shared Python functions that can be used across multiple rules, policies, and other detections.

Returns paginated list of global helpers with metadata including descriptions and code.

Permissions:{'all_of': ['View Rules']}

list_log_sources

List log sources from Panther with optional filters.

Permissions:{'all_of': ['View Rules']}

list_log_type_schemas

List all available log type schemas in Panther. Schemas are transformation instructions that convert raw audit logs into structured data for the data lake and real-time Python rules.

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • schemas: List of schemas, each containing:
    • name: Schema name (Log Type)
    • description: Schema description
    • revision: Schema revision number
    • isArchived: Whether the schema is archived
    • isManaged: Whether the schema is managed by a pack
    • referenceURL: Optional documentation URL
    • createdAt: Creation timestamp
    • updatedAt: Last update timestamp
  • message: Error message if unsuccessful

Permissions:{'all_of': ['View Log Sources']}

list_roles

List all roles from your Panther instance.

Returns list of roles with metadata including permissions and settings.

Permissions:{'all_of': ['Read User Info']}

list_scheduled_queries

List all scheduled queries from your Panther instance.

Scheduled queries are SQL queries that run automatically on a defined schedule for recurring analysis, reporting, and monitoring tasks.

Note: SQL content is excluded from list responses to prevent token limits. Use get_scheduled_query() to retrieve the full SQL for a specific query.

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • queries: List of scheduled queries if successful, each containing:
    • id: Query ID
    • name: Query name
    • description: Query description
    • schedule: Schedule configuration (cron, rate, timeout)
    • managed: Whether the query is managed by Panther
    • createdAt: Creation timestamp
    • updatedAt: Last update timestamp
  • total_queries: Number of queries returned
  • has_next_page: Boolean indicating if more results are available
  • next_cursor: Cursor for fetching the next page of results
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Query Data Lake']}

list_users

List all Panther user accounts.

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • users: List of user accounts if successful
  • total_users: Number of users returned
  • has_next_page: Boolean indicating if more results are available
  • next_cursor: Cursor for fetching the next page of results
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Read User Info']}

query_data_lake

Execute custom SQL queries against Panther's data lake for advanced data analysis and aggregation.

All queries MUST conform to Snowflake's SQL syntax.

If the table has a p_event_time column, the query must include a WHERE clause that filters on it.

Guidance:

For efficiency, when checking for values in an array, use the Snowflake function ARRAY_CONTAINS( <value_expr> , <array> ).

When using ARRAY_CONTAINS, make sure to cast the value_expr to a variant, for example: ARRAY_CONTAINS('example@example.com'::VARIANT, p_any_emails).

When interacting with object type columns use dot notation to traverse a path in a JSON object: <column>:<level1_element>.<level2_element>.<level3_element>. Optionally enclose element names in double quotes: <column>:"<level1_element>"."<level2_element>"."<level3_element>".

If an object/JSON element name does not conform to Snowflake SQL identifier rules, for example if it contains spaces, then you must enclose the element name in double quotes.
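Putting the guidance together, a query might look like this sketch. The table and field names are assumptions for illustration:

```python
# Illustrative Snowflake query following the guidance above: a p_event_time
# filter, ARRAY_CONTAINS with a ::VARIANT cast, and quoted dot notation for an
# element name containing a space.
query = (
    "SELECT p_event_time, p_any_emails "
    "FROM panther_logs.public.aws_cloudtrail "
    "WHERE p_event_time >= '2024-03-20T00:00:00Z' "
    "AND ARRAY_CONTAINS('example@example.com'::VARIANT, p_any_emails) "
    'AND requestParameters:"bucket name" IS NOT NULL'
)
```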

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • status: Status of the query (e.g., "succeeded", "failed", "cancelled")
  • message: Error message if unsuccessful
  • query_id: The unique identifier for the query (null if query execution failed)
  • results: List of query result rows
  • column_info: Dict containing column names and types
  • stats: Dict containing stats about the query
  • has_next_page: Boolean indicating if there are more results available
  • end_cursor: Cursor for fetching the next page of results, or null if no more pages

Permissions:{'all_of': ['Query Data Lake']}

summarize_alert_events

Analyze patterns and relationships across multiple alerts by aggregating their event data into time-based groups.

For each time window (configurable from 1-60 minutes), the tool collects unique entities (IPs, emails, usernames, trace IDs) and alert metadata (IDs, rules, severities) to help identify related activities.

Results are ordered reverse-chronologically (most recent time window first), helping analysts identify temporal patterns, common entities, and potential incident scope.
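The time-window grouping can be sketched as follows. The event shape and field names are hypothetical; real events come from the Panther data lake:

```python
from collections import defaultdict
from datetime import datetime

WINDOW_MINUTES = 30  # the tool accepts windows from 1 to 60 minutes

def window_start(ts: str) -> str:
    """Floor a timestamp to the start of its WINDOW_MINUTES bucket."""
    dt = datetime.strptime(ts, "%Y-%m-%dT%H:%M:%SZ")
    dt = dt.replace(minute=dt.minute - dt.minute % WINDOW_MINUTES, second=0)
    return dt.strftime("%Y-%m-%d %H:%M")

# Hypothetical alert events (field names are illustrative only).
events = [
    {"time": "2024-03-20T00:03:00Z", "ip": "203.0.113.7", "alert_id": "a1"},
    {"time": "2024-03-20T00:07:00Z", "ip": "203.0.113.7", "alert_id": "a2"},
    {"time": "2024-03-20T00:31:00Z", "ip": "198.51.100.9", "alert_id": "a3"},
]

# Collect unique entities and alert IDs per time window.
groups = defaultdict(lambda: {"ips": set(), "alert_ids": set()})
for e in events:
    g = groups[window_start(e["time"])]
    g["ips"].add(e["ip"])
    g["alert_ids"].add(e["alert_id"])
```

Events landing in the same window share one group, so related activity (here, the repeated IP) surfaces together.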

Returns: Dict containing:
  • success: Boolean indicating if the query was successful
  • status: Status of the query (e.g., "succeeded", "failed", "cancelled")
  • message: Error message if unsuccessful
  • results: List of query result rows
  • column_info: Dict containing column names and types
  • stats: Dict containing stats about the query
  • has_next_page: Boolean indicating if there are more results available
  • end_cursor: Cursor for fetching the next page of results, or null if no more pages

Permissions:{'all_of': ['Query Data Lake']}

update_alert_assignee

Update the assignee of one or more alerts by the assignee's ID.

Returns: Dict containing:
  • success: Boolean indicating if the update was successful
  • alerts: List of updated alert IDs if successful
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Manage Alerts']}

update_alert_status

Update the status of one or more Panther alerts.

Returns: Dict containing:
  • success: Boolean indicating if the update was successful
  • alerts: List of updated alert IDs if successful
  • message: Error message if unsuccessful

Permissions:{'all_of': ['Manage Alerts']}
