
Panther MCP Server

Official

get_bytes_processed_metrics

Read-only

Retrieve data ingestion metrics to analyze volume patterns by log type and source, showing total bytes processed within specified time periods.

Instructions

Retrieves data ingestion metrics showing total bytes processed per log type and source, helping analyze data volume patterns.

Returns a dict with:

• success: Boolean indicating if the query was successful
• bytes_processed: List of series with breakdown by log type and source
• total_bytes: Total bytes processed in the period
• start_date: Start date of the period
• end_date: End date of the period
• interval_in_minutes: Grouping interval for the metrics

Permissions: requires the Read Panther Metrics permission.
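
For orientation, a successful response might look like the following. The log-type labels, byte counts, and breakdown contents are illustrative only; the exact breakdown shape depends on the Panther GraphQL API:

    {
        "success": True,
        "bytes_processed": [
            {"label": "AWS.CloudTrail", "value": 1073741824, "breakdown": {}},
            {"label": "Okta.SystemLog", "value": 536870912, "breakdown": {}},
        ],
        "total_bytes": 1610612736,
        "start_date": "2024-03-18T00:00:00Z",
        "end_date": "2024-03-20T00:00:00Z",
        "interval_in_minutes": 1440,
    }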

Input Schema

• start_date (optional; default None): Start date in ISO-8601 format. If not provided, defaults to the start of the current day (UTC).
• end_date (optional; default None): End date in ISO-8601 format. If not provided, defaults to the end of the current day (UTC).
• interval_in_minutes (optional; default 1440): How data points are aggregated over time. Smaller intervals provide more granular detail of when events occurred; larger intervals show broader trends but obscure the precise timing of incidents (see the invocation sketch below).
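
For orientation, a hypothetical direct invocation (bypassing the MCP transport, e.g. in a test, and assuming the @mcp_tool decorator returns the function unchanged) might look like this; the date values are illustrative:

    import asyncio

    result = asyncio.run(
        get_bytes_processed_metrics(
            start_date="2024-03-18T00:00:00Z",
            end_date="2024-03-20T00:00:00Z",
            interval_in_minutes=720,  # 12-hour buckets: four data points across two days
        )
    )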

Output Schema

No structured output schema is defined; see the Returns section above for the shape of the result.

Implementation Reference

  • Primary handler function for the 'get_bytes_processed_metrics' tool. It is registered automatically via the @mcp_tool decorator, declares its input schema with Annotated Fields, and contains the logic to fetch and process bytes-processed metrics from the Panther GraphQL API.
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.SUMMARY_READ),
            "readOnlyHint": True,
        }
    )
    async def get_bytes_processed_metrics(
        start_date: Annotated[
            str | None,
            Field(
                description="Optional start date in ISO-8601 format. If provided, defaults to the start of the current day UTC.",
                examples=["2024-03-20T00:00:00Z"],
            ),
        ] = None,
        end_date: Annotated[
            str | None,
            Field(
                description="Optional end date in ISO-8601 format. If provided, defaults to the end of the current day UTC.",
                examples=["2024-03-20T00:00:00Z"],
            ),
        ] = None,
        interval_in_minutes: Annotated[
            int,
            BeforeValidator(_validate_interval),
            Field(
                description="How data points are aggregated over time, with smaller intervals providing more granular detail of when events occurred, while larger intervals show broader trends but obscure the precise timing of incidents.",
                examples=[60, 720, 1440],
            ),
        ] = 1440,
    ) -> dict[str, Any]:
        """Retrieves data ingestion metrics showing total bytes processed per log type and source, helping analyze data volume patterns.
    
        Returns:
            Dict:
            - success: Boolean indicating if the query was successful
            - bytes_processed: List of series with breakdown by log type and source
            - total_bytes: Total bytes processed in the period
            - start_date: Start date of the period
            - end_date: End date of the period
            - interval_in_minutes: Grouping interval for the metrics
        """
        try:
            # If start or end date is missing, use week's date range
            if not start_date or not end_date:
                default_start_date, default_end_date = _get_week_date_range()
                if not start_date:
                    start_date = default_start_date
                if not end_date:
                    end_date = default_end_date
    
            logger.info(
                f"Fetching bytes processed metrics from {start_date} to {end_date} with {interval_in_minutes} minute interval"
            )
    
            # Prepare variables
            variables = {
                "input": {
                    "fromDate": start_date,
                    "toDate": end_date,
                    "intervalInMinutes": interval_in_minutes,
                }
            }
    
            # Execute query
            result = await _execute_query(METRICS_BYTES_PROCESSED_QUERY, variables)
    
            if not result or "metrics" not in result:
                logger.error(f"Could not find key 'metrics' in result: {result}")
                raise Exception("Failed to fetch metrics data")
    
            metrics_data = result["metrics"]
            bytes_processed = metrics_data["bytesProcessedPerSource"]
    
            # Calculate total bytes across all series
            total_bytes = sum(series["value"] for series in bytes_processed)
    
            return {
                "success": True,
                "bytes_processed": bytes_processed,
                "total_bytes": total_bytes,
                "start_date": start_date,
                "end_date": end_date,
                "interval_in_minutes": interval_in_minutes,
            }
    
        except Exception as e:
            logger.error(f"Failed to fetch bytes processed metrics: {str(e)}")
            return {
                "success": False,
                "message": f"Failed to fetch bytes processed metrics: {str(e)}",
            }
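  • The helpers _validate_interval and _get_week_date_range are referenced above but not shown on this page. A minimal sketch of plausible implementations, assuming the week-long fallback implied by the inline comment and the intervals listed in the Field examples (the actual allowed values and window semantics may differ):
    from datetime import datetime, timedelta, timezone

    def _get_week_date_range() -> tuple[str, str]:
        # Hypothetical: a trailing seven-day window ending now, in ISO-8601 UTC.
        now = datetime.now(timezone.utc)
        start = now - timedelta(days=7)
        fmt = "%Y-%m-%dT%H:%M:%SZ"
        return start.strftime(fmt), now.strftime(fmt)

    def _validate_interval(value: int) -> int:
        # Hypothetical: restrict to the intervals shown in the Field examples.
        allowed = {60, 720, 1440}
        if value not in allowed:
            raise ValueError(f"interval_in_minutes must be one of {sorted(allowed)}")
        return value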
  • GraphQL query definition used by the tool handler. Defines the input (MetricsInput!) and output structure for bytes processed metrics, serving as the backend schema.
    METRICS_BYTES_PROCESSED_QUERY = gql("""
    query GetBytesProcessedMetrics($input: MetricsInput!) {
        metrics(input: $input) {
            bytesProcessedPerSource {
                label
                value
                breakdown
            }
        }
    }
    """)
  • Explicit call to register_all_tools which collects and registers all @mcp_tool decorated functions, including get_bytes_processed_metrics, with the MCP server instance.
    from .panther_mcp_core.tools.registry import register_all_tools
    
    # Create the MCP server with lifespan context for shared HTTP client management
    # Note: Dependencies are declared in fastmcp.json for FastMCP v2.14.0+
    mcp = FastMCP(MCP_SERVER_NAME, lifespan=lifespan)
    
    # Register all tools with MCP using the registry
    register_all_tools(mcp)
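  • The registry module itself is not shown. A minimal sketch of the decorator-plus-registry pattern it implies; everything inside the sketch besides the mcp_tool and register_all_tools names is an assumption:
    from collections.abc import Callable

    _REGISTERED_TOOLS: list[tuple[Callable, dict]] = []

    def mcp_tool(annotations: dict | None = None):
        # Hypothetical: record each decorated function at import time.
        def decorator(fn: Callable) -> Callable:
            _REGISTERED_TOOLS.append((fn, annotations or {}))
            return fn
        return decorator

    def register_all_tools(mcp) -> None:
        # Hypothetical: hand each recorded function to the FastMCP instance.
        for fn, annotations in _REGISTERED_TOOLS:
            mcp.tool(fn, annotations=annotations)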
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, indicating this is a safe read operation. The description adds value by specifying the return structure (e.g., success flag, breakdowns, totals) and permissions ('Read Panther Metrics'), which aren't covered by annotations. However, it doesn't disclose other behavioral traits like rate limits, caching, or error handling. With annotations handling the safety profile, the description offers moderate additional context, aligning with a baseline score.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose. The 'Returns' section is detailed but necessary for clarity, and the permissions note is concise. The structure could be slightly improved by integrating the permissions note into the main flow or by using bullet points for readability, but overall it's efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (3 parameters, 100% schema coverage, annotations, and an output schema implied by the return description), the description is fairly complete. It explains what the tool does, the return format, and permissions. The output schema details in the description compensate for the lack of a formal output schema field. However, it could benefit from more usage context or examples to fully guide the agent, keeping it from a perfect score.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with each parameter (start_date, end_date, interval_in_minutes) well-documented in the schema. The description doesn't add any parameter-specific details beyond what's in the schema, such as explaining how the interval affects the 'bytes_processed' list. Since the schema does the heavy lifting, the baseline score of 3 is appropriate, as the description provides no extra parameter semantics.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Retrieves data ingestion metrics showing total bytes processed per log type and source, helping analyze data volume patterns.' It specifies the verb ('Retrieves'), resource ('data ingestion metrics'), and scope ('per log type and source'), making the purpose unambiguous. However, it doesn't explicitly differentiate from sibling tools like 'get_rule_alert_metrics' or 'get_severity_alert_metrics', which also retrieve metrics but for different aspects, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions analyzing 'data volume patterns,' but doesn't specify scenarios, prerequisites, or exclusions. For example, it doesn't clarify if this is for real-time monitoring, historical analysis, or how it compares to other metrics tools in the sibling list. This lack of context leaves the agent without clear usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
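
To make the "use X instead of Y when Z" pattern concrete, a revised description might open like this; the sibling tool names are taken from the Purpose critique above, and the wording is only a suggestion:

    Retrieves data ingestion metrics showing total bytes processed per log type
    and source. Use this to analyze ingestion volume patterns (e.g., spotting
    spikes or gaps across onboarded sources); use get_rule_alert_metrics or
    get_severity_alert_metrics when the question is about alert activity rather
    than data volume.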
