Panther MCP Server

Official

list_data_models

Retrieve all data models from Panther. Data models map log type schema fields to a unified data model for Python rules, may include custom field mappings, and are returned as a paginated list with metadata.

Instructions

List all data models from your Panther instance. Data models are used only in Panther's Python rules to map log type schema fields to a unified data model. They may also contain custom mappings for fields that are not part of the log type schema.

Returns a paginated list of data models with metadata, including mappings and log types.

Permissions: {'all_of': ['View Rules']}
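
As a quick check, any MCP client can call this tool directly. The sketch below uses the official MCP Python SDK over stdio; the 'uvx mcp-panther' launch command and the environment-based Panther credentials are assumptions, so substitute however you actually run the server.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client


    async def main() -> None:
        # Assumption: mcp-panther is started locally via uvx and picks up its
        # Panther API credentials from the environment.
        server = StdioServerParameters(command="uvx", args=["mcp-panther"])
        async with stdio_client(server) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # Ask for up to 25 data models; the 'View Rules' permission is
                # enforced server-side, not by this client.
                result = await session.call_tool("list_data_models", {"limit": 25})
                print(result.content)


    asyncio.run(main())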

Input Schema

Name     Required  Description                                             Default
cursor   No        Optional cursor for pagination from a previous query    (none)
limit    No        Maximum number of results to return (1-1000)            100
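
To retrieve more data models than a single call returns, pass each response's next_cursor value back in as cursor until has_next_page is false (both fields appear in the handler's return value under Implementation Reference below). A minimal sketch, assuming the tool's JSON payload arrives as a single text content item in the MCP result; the fetch_all_data_models helper is hypothetical, not part of the server:

    import json

    from mcp import ClientSession


    async def fetch_all_data_models(session: ClientSession) -> list[dict]:
        """Collect every data model by following the pagination cursor.

        Assumes the tool's JSON payload arrives as a single text content item;
        adapt the parsing if your client exposes structured content directly.
        """
        collected: list[dict] = []
        cursor: str | None = None
        while True:
            args: dict = {"limit": 100}
            if cursor:
                args["cursor"] = cursor
            result = await session.call_tool("list_data_models", args)
            payload = json.loads(result.content[0].text)
            if not payload.get("success"):
                raise RuntimeError(payload.get("message", "list_data_models failed"))
            collected.extend(payload["data_models"])
            if not payload.get("has_next_page"):
                return collected
            cursor = payload["next_cursor"]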

Implementation Reference

  • The handler function for the 'list_data_models' tool. It defines the input schema with Annotated types and Pydantic Field, fetches data models from the Panther REST API with pagination support, trims each result to a fixed set of fields, and returns structured results or an error message.
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.RULE_READ),
            "readOnlyHint": True,
        }
    )
    async def list_data_models(
        cursor: Annotated[
            str | None,
            Field(description="Optional cursor for pagination from a previous query"),
        ] = None,
        limit: Annotated[
            int,
            Field(
                description="Maximum number of results to return (1-1000)",
                examples=[100, 25, 50],
                ge=1,
                le=1000,
            ),
        ] = 100,
    ) -> dict[str, Any]:
        """List all data models from your Panther instance.

        Data models are used only in Panther's Python rules to map log type schema
        fields to a unified data model. They may also contain custom mappings for
        fields that are not part of the log type schema.

        Returns paginated list of data models with metadata including mappings and log types.
        """
        logger.info(f"Fetching {limit} data models from Panther")

        try:
            # Prepare query parameters
            params = {"limit": limit}
            if cursor and cursor.lower() != "null":  # Only add cursor if it's not null
                params["cursor"] = cursor
                logger.info(f"Using cursor for pagination: {cursor}")

            async with get_rest_client() as client:
                result, _ = await client.get("/data-models", params=params)

            # Extract data models and pagination info
            data_models = result.get("results", [])
            next_cursor = result.get("next")

            # Keep only specific fields for each data model to limit the amount of data returned
            filtered_data_models_metadata = [
                {
                    "id": data_model["id"],
                    "description": data_model.get("description"),
                    "displayName": data_model.get("displayName"),
                    "enabled": data_model.get("enabled"),
                    "logTypes": data_model.get("logTypes"),
                    "mappings": data_model.get("mappings"),
                    "managed": data_model.get("managed"),
                    "createdAt": data_model.get("createdAt"),
                    "lastModified": data_model.get("lastModified"),
                }
                for data_model in data_models
            ]

            logger.info(
                f"Successfully retrieved {len(filtered_data_models_metadata)} data models"
            )

            return {
                "success": True,
                "data_models": filtered_data_models_metadata,
                "total_data_models": len(filtered_data_models_metadata),
                "has_next_page": bool(next_cursor),
                "next_cursor": next_cursor,
            }
        except Exception as e:
            logger.error(f"Failed to list data models: {str(e)}")
            return {"success": False, "message": f"Failed to list data models: {str(e)}"}
  • Location where all tools, including 'list_data_models', are registered with the FastMCP server instance by calling register_all_tools.
    register_all_tools(mcp)
  • Import of the data_models module in the tools package __init__.py, which triggers the @mcp_tool decorator on list_data_models and adds it to the tool registry (a minimal sketch of this registry pattern follows the list).
    data_models,
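
Taken together, these references describe a decorator-plus-registry pattern: importing the module runs @mcp_tool, which records the handler, and register_all_tools later attaches every recorded handler to the FastMCP server. The sketch below illustrates that pattern; the registry name and the handling of annotations are illustrative, not the actual mcp-panther internals.

    from collections.abc import Callable
    from typing import Any

    from mcp.server.fastmcp import FastMCP

    # Illustrative registry; the real project keeps its own equivalent internally.
    _TOOL_REGISTRY: list[tuple[Callable[..., Any], dict[str, Any]]] = []


    def mcp_tool(annotations: dict[str, Any] | None = None):
        """Record the decorated handler so register_all_tools can attach it later."""

        def decorator(fn: Callable[..., Any]) -> Callable[..., Any]:
            _TOOL_REGISTRY.append((fn, annotations or {}))
            return fn

        return decorator


    def register_all_tools(mcp: FastMCP) -> None:
        """Register every recorded handler with the FastMCP server instance."""
        for fn, _annotations in _TOOL_REGISTRY:
            # FastMCP derives each tool's input schema from the handler's type
            # hints; forwarding of annotations/permissions is omitted here.
            mcp.tool()(fn)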

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/panther-labs/mcp-panther'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.