
Panther MCP Server

Official

list_log_type_schemas

Read-only

Retrieve available log type schemas in Panther to understand transformation rules for converting raw audit logs into structured data for analysis and detection.

Instructions

List all available log type schemas in Panther. Schemas are transformation instructions that convert raw audit logs into structured data for the data lake and real-time Python rules.

Returns: Dict containing:
- success: Boolean indicating if the query was successful
- schemas: List of schemas, each containing:
  - name: Schema name (Log Type)
  - description: Schema description
  - revision: Schema revision number
  - isArchived: Whether the schema is archived
  - isManaged: Whether the schema is managed by a pack
  - referenceURL: Optional documentation URL
  - createdAt: Creation timestamp
  - updatedAt: Last update timestamp
- message: Error message if unsuccessful

Permissions: {'all_of': ['View Log Sources']}
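To make the documented return shape concrete, here is an illustrative success and failure response; the schema values below are hypothetical examples, not real Panther output.

```python
# Illustrative successful response from list_log_type_schemas.
# Field names match the documented return structure; values are made up.
example_response = {
    "success": True,
    "schemas": [
        {
            "name": "AWS.CloudTrail",            # the Log Type
            "description": "AWS CloudTrail audit logs",
            "revision": 3,
            "isArchived": False,
            "isManaged": True,                    # shipped by a pack
            "referenceURL": "https://docs.aws.amazon.com/cloudtrail/",
            "createdAt": "2023-01-15T00:00:00Z",
            "updatedAt": "2024-02-01T00:00:00Z",
        }
    ],
}

# On failure, the tool returns a message instead of a schema list:
example_error = {"success": False, "message": "Failed to fetch schemas: timeout"}
```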

Input Schema

Name          Required  Description                                                   Default
contains      No        Optional filter by name or schema field name                  None
is_archived   No        Filter by archive status (default: False shows non-archived)  False
is_in_use     No        Filter for used/active schemas (default: False shows all)     False
is_managed    No        Filter for pack-managed schemas (default: False shows all)    False
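The handler (shown under Implementation Reference below) only forwards non-default filters to the server, mapping the snake_case tool arguments to camelCase GraphQL keys. A standalone re-implementation of that filter-building step, for illustration only:

```python
def build_input_vars(contains=None, is_archived=False, is_in_use=False, is_managed=False):
    """Mirror of the handler's filter-building logic: snake_case arguments
    map to camelCase GraphQL input keys, and default-valued filters are
    omitted from the query input entirely."""
    input_vars = {}
    if contains is not None:
        input_vars["contains"] = contains
    if is_archived:
        input_vars["isArchived"] = is_archived
    if is_in_use:
        input_vars["isInUse"] = is_in_use
    if is_managed:
        input_vars["isManaged"] = is_managed
    return input_vars

# Only non-default arguments are sent to the server:
print(build_input_vars(contains="AWS", is_in_use=True))
# → {'contains': 'AWS', 'isInUse': True}
print(build_input_vars())
# → {}
```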

Output Schema

No output schema fields are defined; the response shape is documented under Returns above.

Implementation Reference

  • The handler function that implements the core logic of listing log type schemas using a GraphQL query with optional filters. Includes input schema definitions via Pydantic and output formatting.
    # NOTE: excerpt — project-internal names (mcp_tool, all_perms, Permission,
    # logger, _execute_query, LIST_SCHEMAS_QUERY) are defined elsewhere in the module.
    from typing import Annotated, Any

    from pydantic import Field
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.LOG_SOURCE_READ),
            "readOnlyHint": True,
        }
    )
    async def list_log_type_schemas(
        contains: Annotated[
            str | None,
            Field(description="Optional filter by name or schema field name"),
        ] = None,
        is_archived: Annotated[
            bool,
            Field(
                description="Filter by archive status (default: False shows non-archived)"
            ),
        ] = False,
        is_in_use: Annotated[
            bool,
            Field(description="Filter for used/active schemas (default: False shows all)"),
        ] = False,
        is_managed: Annotated[
            bool,
            Field(description="Filter for pack-managed schemas (default: False shows all)"),
        ] = False,
    ) -> dict[str, Any]:
        """List all available log type schemas in Panther. Schemas are transformation instructions that convert raw audit logs
        into structured data for the data lake and real-time Python rules.
    
        Returns:
            Dict containing:
            - success: Boolean indicating if the query was successful
            - schemas: List of schemas, each containing:
                - name: Schema name (Log Type)
                - description: Schema description
                - revision: Schema revision number
                - isArchived: Whether the schema is archived
                - isManaged: Whether the schema is managed by a pack
                - referenceURL: Optional documentation URL
                - createdAt: Creation timestamp
                - updatedAt: Last update timestamp
            - message: Error message if unsuccessful
        """
        logger.info("Fetching available schemas")
    
        try:
            # Prepare input variables, only including non-default values
            input_vars = {}
            if contains is not None:
                input_vars["contains"] = contains
            if is_archived:
                input_vars["isArchived"] = is_archived
            if is_in_use:
                input_vars["isInUse"] = is_in_use
            if is_managed:
                input_vars["isManaged"] = is_managed
    
            variables = {"input": input_vars}
    
            # Execute the query using shared client
            result = await _execute_query(LIST_SCHEMAS_QUERY, variables)
    
            # Get schemas data and ensure we have the required structure
            schemas_data = result.get("schemas")
            if not schemas_data:
                return {"success": False, "message": "No schemas data returned from server"}
    
            edges = schemas_data.get("edges", [])
            schemas = [edge["node"] for edge in edges] if edges else []
    
            logger.info(f"Successfully retrieved {len(schemas)} schemas")
    
            # Format the response
            return {
                "success": True,
                "schemas": schemas,
            }
    
        except Exception as e:
            logger.error(f"Failed to fetch schemas: {str(e)}")
            return {
                "success": False,
                "message": f"Failed to fetch schemas: {str(e)}",
            }
  • The @mcp_tool decorator registers list_log_type_schemas in the tool registry during module import.
  • Import of the schemas module in tools __init__.py triggers loading and registration of the list_log_type_schemas tool via its decorator.
    schemas,
  • Calls register_all_tools(mcp) to register all tools from the registry, including list_log_type_schemas, with the FastMCP server instance.
    register_all_tools(mcp)
  • Pydantic-based input schema parameters and docstring defining expected output structure for the tool.
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations already declare readOnlyHint=true, indicating a safe read operation. The description adds valuable context beyond this: it explains what schemas are (transformation instructions), mentions the return structure in detail, and includes permissions requirements. However, it doesn't cover rate limits or pagination behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured with clear sections: purpose explanation, return value details, and permissions. Every sentence adds value without redundancy, and the information is front-loaded with the core purpose stated first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has comprehensive annotations (readOnlyHint), 100% schema coverage, and a detailed output schema in the description, the description provides complete context. It explains the tool's purpose, return structure, and permissions, making it fully adequate for an agent to understand and use the tool correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 100% schema description coverage, the input schema fully documents all 4 parameters. The description doesn't add any parameter-specific information beyond what's in the schema, so it meets the baseline of 3 where the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'List' and the resource 'all available log type schemas in Panther', specifying that schemas are transformation instructions for audit logs. It distinguishes from sibling tools like 'get_log_type_schema_details' by indicating this lists all schemas rather than getting details of a specific one.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage for retrieving schemas for data lake and Python rules, but does not explicitly state when to use this tool versus alternatives like 'get_log_type_schema_details' or other list tools. The permissions requirement provides some context but not explicit guidance on tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
