
Panther MCP Server

Official

get_data_model

Read-only

Retrieve detailed information about a Panther data model, including Python body code and UDM mappings for security monitoring analysis.

Instructions

Get detailed information about a Panther data model, including the mappings and body

Returns complete data model information including Python body code and UDM mappings.

Permissions: {'all_of': ['View Rules']}

Input Schema

Name           Required  Description                         Default
data_model_id  Yes       The ID of the data model to fetch   —
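As a concrete illustration of the input schema, here is a minimal sketch of validating the single required field before dispatching a call. The field name comes from the schema; the helper name and the example ID value are hypothetical:

```python
def validate_arguments(arguments: dict) -> str:
    """Mirror the input schema: 'data_model_id' is required and must be a string."""
    data_model_id = arguments.get("data_model_id")
    if not isinstance(data_model_id, str) or not data_model_id:
        raise ValueError("data_model_id is required and must be a non-empty string")
    return data_model_id

# Example arguments payload an MCP client might send (ID is illustrative):
arguments = {"data_model_id": "AWS_CloudTrail"}
```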

Output Schema

No arguments

Implementation Reference

  • Complete implementation of the 'get_data_model' MCP tool. It is decorated with @mcp_tool for auto-registration, using the function name as the tool name, and defines its input schema via Annotated[str, Field(...)] for the 'data_model_id' parameter. The function body fetches data model details from the Panther REST API endpoint '/data-models/{data_model_id}', handles 404 (not found) and other exceptions, and returns structured JSON responses.
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.RULE_READ),
            "readOnlyHint": True,
        }
    )
    async def get_data_model(
        data_model_id: Annotated[
            str,
            Field(
                description="The ID of the data model to fetch",
                examples=["MyDataModel", "AWS_CloudTrail", "StandardUser"],
            ),
        ],
    ) -> dict[str, Any]:
        """Get detailed information about a Panther data model, including the mappings and body
    
        Returns complete data model information including Python body code and UDM mappings.
        """
        logger.info(f"Fetching data model details for data model ID: {data_model_id}")
    
        try:
            async with get_rest_client() as client:
                # Allow 404 as a valid response to handle not found case
                result, status = await client.get(
                    f"/data-models/{data_model_id}", expected_codes=[200, 404]
                )
    
                if status == 404:
                    logger.warning(f"No data model found with ID: {data_model_id}")
                    return {
                        "success": False,
                        "message": f"No data model found with ID: {data_model_id}",
                    }
    
            logger.info(
                f"Successfully retrieved data model details for data model ID: {data_model_id}"
            )
            return {"success": True, "data_model": result}
        except Exception as e:
            logger.error(f"Failed to get data model details: {str(e)}")
            return {
                "success": False,
                "message": f"Failed to get data model details: {str(e)}",
            }
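Assuming the success/failure response shape shown above, a hedged sketch of how a caller might summarize the result. The 'id' and 'mappings' field names inside data_model are assumptions about the REST payload, not confirmed by the source:

```python
def summarize_data_model_response(resp: dict) -> str:
    """Return a one-line summary of a get_data_model response dict."""
    if not resp.get("success"):
        # Failure path: the tool returns {"success": False, "message": ...}
        return resp.get("message", "Unknown error")
    dm = resp["data_model"]
    # Field names below are illustrative guesses at the REST API payload.
    mappings = dm.get("mappings", [])
    return f"{dm.get('id', '<unknown>')}: {len(mappings)} UDM mapping(s)"
```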
  • Calls register_all_tools(mcp) in the main server setup, which registers all decorated tools including 'get_data_model' with the FastMCP instance.
    register_all_tools(mcp)
  • Implementation of register_all_tools that iterates over the global _tool_registry (populated by @mcp_tool decorators), extracts metadata, and invokes mcp_instance.tool(name=func.__name__ or metadata['name'], ...) for each tool.
    def register_all_tools(mcp_instance) -> None:
        """
        Register all tools marked with @mcp_tool with the given MCP instance.
    
        Args:
            mcp_instance: The FastMCP instance to register tools with
        """
        logger.info(f"Registering {len(_tool_registry)} tools with MCP")
    
        # Sort tools by name
        sorted_funcs = sorted(_tool_registry, key=lambda f: f.__name__)
        for tool in sorted_funcs:
            logger.debug(f"Registering tool: {tool.__name__}")
    
            # Get tool metadata if it exists
            metadata = getattr(tool, "_mcp_tool_metadata", {})
    
            annotations = metadata.get("annotations", {})
            # Create tool decorator with metadata
            tool_decorator = mcp_instance.tool(
                name=metadata.get("name"),
                description=metadata.get("description"),
                annotations=annotations,
            )
    
            if annotations and annotations.get("permissions"):
                if not tool.__doc__:
                    tool.__doc__ = ""
                tool.__doc__ += f"\n\n Permissions:{annotations.get('permissions')}"
    
            # Register the tool
            tool_decorator(tool)
    
        logger.info("All tools registered successfully")
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already declare readOnlyHint=true, so the agent knows this is a safe read operation. The description adds useful context: it specifies the permissions required ('View Rules'), which isn't covered by annotations. However, it doesn't disclose other behavioral traits like rate limits, error conditions, or what 'complete' information entails beyond the schema. No contradiction with annotations exists.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized, with three front-loaded sentences: the first states the purpose, the second elaborates on the return value, and the third adds permissions. There is minor redundancy between the first two sentences, but overall the description is efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (simple read operation), annotations cover safety, schema covers parameters fully, and an output schema exists (so return values are documented), the description is mostly complete. It adds permissions context, which is valuable. However, it lacks guidance on usage versus siblings, which is a minor gap in this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, with the parameter 'data_model_id' fully documented in the schema. The description adds no additional meaning about parameters beyond what the schema provides (e.g., no examples or usage notes). With high schema coverage, the baseline score of 3 is appropriate as the description doesn't compensate but doesn't need to.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get detailed information about a Panther data model, including the mappings and body' and 'Returns complete data model information including Python body code and UDM mappings.' This specifies the verb ('Get'), resource ('Panther data model'), and scope of information returned. However, it doesn't explicitly differentiate from sibling tools like 'list_data_models' or 'get_detection' beyond the data model focus.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions permissions ('View Rules'), but this doesn't help choose between this and sibling tools like 'list_data_models' (for listing) or 'get_detection' (for other resources). There's no explicit when/when-not context or named alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

