
Panther MCP Server (Official)

list_data_models

Retrieve a paginated list of all data models from your Panther instance. Data models map log type schema fields to a unified structure and can carry custom field mappings used by Panther's Python rules.

Instructions

List all data models from your Panther instance. Data models are used only in Panther's Python rules to map log type schema fields to a unified data model. They may also contain custom mappings for fields that are not part of the log type schema.

Returns a paginated list of data models with metadata, including mappings and log types.

Permissions: {'all_of': ['View Rules']}
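
For context, the sketch below shows how an MCP client could call this tool over stdio using the official MCP Python SDK. The launch command for the server ("uvx mcp-panther") is an assumption; substitute however your Panther MCP Server is actually started.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    # Assumption: the server is launched locally via "uvx mcp-panther";
    # adjust the command/args to match your deployment.
    server_params = StdioServerParameters(command="uvx", args=["mcp-panther"])

    async def main() -> None:
        async with stdio_client(server_params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Call the tool with a small page size; both arguments are optional.
                result = await session.call_tool(
                    "list_data_models", arguments={"limit": 25}
                )
                print(result.content)

    asyncio.run(main())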

Input Schema

Name   | Required | Description                                           | Default
------ | -------- | ----------------------------------------------------- | -------
cursor | No       | Optional cursor for pagination from a previous query   | None
limit  | No       | Maximum number of results to return (1-1000)           | 100
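
As a usage illustration, here is a minimal pagination sketch driven by these two parameters. The import path is an assumption about where the tool function lives; the response keys match the handler's return value shown under Implementation Reference below.

    # Minimal pagination sketch; the module path below is an assumption.
    from mcp_panther.tools.data_models import list_data_models

    async def fetch_all_data_models() -> list[dict]:
        items: list[dict] = []
        cursor: str | None = None
        while True:
            page = await list_data_models(cursor=cursor, limit=100)
            if not page["success"]:
                raise RuntimeError(page["message"])
            items.extend(page["data_models"])
            if not page["has_next_page"]:
                return items
            cursor = page["next_cursor"]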

Implementation Reference

  • The core handler implementation for the 'list_data_models' tool. It includes the @mcp_tool decorator with permissions, the input schema via Annotated Pydantic Fields for the cursor and limit parameters, and the full execution logic: it prepares query parameters, calls the REST API /data-models endpoint, filters results down to essential fields, handles pagination, logs activity, and returns a structured response with a success flag (an illustrative response payload appears after this list).
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.RULE_READ),
            "readOnlyHint": True,
        }
    )
    async def list_data_models(
        cursor: Annotated[
            str | None,
            Field(description="Optional cursor for pagination from a previous query"),
        ] = None,
        limit: Annotated[
            int,
            Field(
                description="Maximum number of results to return (1-1000)",
                examples=[100, 25, 50],
                ge=1,
                le=1000,
            ),
        ] = 100,
    ) -> dict[str, Any]:
        """List all data models from your Panther instance.

        Data models are used only in Panther's Python rules to map log type schema
        fields to a unified data model. They may also contain custom mappings for
        fields that are not part of the log type schema.

        Returns paginated list of data models with metadata including mappings and log types.
        """
        logger.info(f"Fetching {limit} data models from Panther")

        try:
            # Prepare query parameters
            params = {"limit": limit}
            if cursor and cursor.lower() != "null":  # Only add cursor if it's not null
                params["cursor"] = cursor
                logger.info(f"Using cursor for pagination: {cursor}")

            async with get_rest_client() as client:
                result, _ = await client.get("/data-models", params=params)

            # Extract data models and pagination info
            data_models = result.get("results", [])
            next_cursor = result.get("next")

            # Keep only specific fields for each data model to limit the amount of data returned
            filtered_data_models_metadata = [
                {
                    "id": data_model["id"],
                    "description": data_model.get("description"),
                    "displayName": data_model.get("displayName"),
                    "enabled": data_model.get("enabled"),
                    "logTypes": data_model.get("logTypes"),
                    "mappings": data_model.get("mappings"),
                    "managed": data_model.get("managed"),
                    "createdAt": data_model.get("createdAt"),
                    "lastModified": data_model.get("lastModified"),
                }
                for data_model in data_models
            ]

            logger.info(
                f"Successfully retrieved {len(filtered_data_models_metadata)} data models"
            )

            return {
                "success": True,
                "data_models": filtered_data_models_metadata,
                "total_data_models": len(filtered_data_models_metadata),
                "has_next_page": bool(next_cursor),
                "next_cursor": next_cursor,
            }
        except Exception as e:
            logger.error(f"Failed to list data models: {str(e)}")
            return {"success": False, "message": f"Failed to list data models: {str(e)}"}
  • Input schema definition using Pydantic's Annotated and Field for parameter validation, descriptions, examples, and constraints (limit 1-1000). The output is typed as dict[str, Any]. A sketch of how these constraints behave at validation time appears after this list.
    async def list_data_models(
        cursor: Annotated[
            str | None,
            Field(description="Optional cursor for pagination from a previous query"),
        ] = None,
        limit: Annotated[
            int,
            Field(
                description="Maximum number of results to return (1-1000)",
                examples=[100, 25, 50],
                ge=1,
                le=1000,
            ),
        ] = 100,
    ) -> dict[str, Any]:
  • Registration of all tools (including list_data_models) with the FastMCP server instance via register_all_tools(mcp), which collects every @mcp_tool-decorated function across the imported modules. A minimal sketch of this registry pattern appears after this list.
    mcp = FastMCP(MCP_SERVER_NAME, dependencies=deps)

    # Register all tools with MCP using the registry
    register_all_tools(mcp)

    # Register all prompts with MCP using the registry
    register_all_prompts(mcp)

    # Register all resources with MCP using the registry
    register_all_resources(mcp)
  • Import of the data_models module in the tools package's __init__.py, which triggers execution of the @mcp_tool decorator and adds list_data_models to the tool registry.
    data_models,
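
Based on the handler's return statement above, a successful call produces a payload shaped like the following. The single data model entry shown is purely illustrative; the field values and the mapping structure are made up, only the top-level keys come from the code.

    # Illustrative response shape only; values are not real output.
    example_response = {
        "success": True,
        "data_models": [
            {
                "id": "StandardDataModel.Example",  # made-up id
                "description": "Example mapping",
                "displayName": "Example",
                "enabled": True,
                "logTypes": ["AWS.CloudTrail"],
                "mappings": [{"name": "source_ip", "path": "sourceIPAddress"}],  # shape is illustrative
                "managed": True,
                "createdAt": "2024-01-01T00:00:00Z",
                "lastModified": "2024-01-02T00:00:00Z",
            }
        ],
        "total_data_models": 1,
        "has_next_page": False,
        "next_cursor": None,
    }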
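
To make the constraint behavior on limit concrete, the self-contained sketch below reproduces the same Annotated/Field pattern and shows Pydantic rejecting an out-of-range value. It is an illustration of the mechanism, not server code.

    from typing import Annotated

    from pydantic import Field, TypeAdapter, ValidationError

    # Same constraint pattern as the limit parameter: 1 <= limit <= 1000.
    LimitParam = Annotated[int, Field(ge=1, le=1000)]

    adapter = TypeAdapter(LimitParam)
    print(adapter.validate_python(100))    # 100 -- accepted

    try:
        adapter.validate_python(0)         # rejected: below ge=1
    except ValidationError as exc:
        print(exc.errors()[0]["type"])     # "greater_than_equal"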
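
The registration and import bullets above describe a decorator-plus-registry pattern. The sketch below is a minimal, hypothetical version of that pattern; the internal names (_TOOL_REGISTRY, the simplified mcp_tool signature) are assumptions, not the actual mcp-panther code.

    from typing import Any, Callable

    from mcp.server.fastmcp import FastMCP

    # Hypothetical registry populated as each @mcp_tool-decorated function is defined.
    _TOOL_REGISTRY: list[Callable[..., Any]] = []

    def mcp_tool(annotations: dict[str, Any] | None = None):
        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            _TOOL_REGISTRY.append(fn)  # annotations handling omitted for brevity
            return fn
        return decorator

    def register_all_tools(mcp: FastMCP) -> None:
        # Hand every collected function to FastMCP as a tool.
        for fn in _TOOL_REGISTRY:
            mcp.tool(name=fn.__name__)(fn)

    # The tools package's __init__.py then only needs to import each module for its
    # side effects (e.g. "from . import data_models") so the decorators run and
    # fill the registry before register_all_tools(mcp) is called.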


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/panther-labs/mcp-panther'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.