Panther MCP Server

Official

get_table_schema

Read-only

Retrieve column names and data types for data lake tables to understand table structure and write optimized queries in Snowflake.

Instructions

Get column details for a specific data lake table.

IMPORTANT: This returns the table structure in Snowflake. For writing optimal queries, ALSO call get_panther_log_type_schema() to understand:

  • Nested object structures (only shown as 'object' type here)

  • Which fields map to p_any_* indicator columns

  • Array element structures

Example workflow:

  1. get_panther_log_type_schema(["AWS.CloudTrail"]) - understand structure

  2. get_table_schema("panther_logs.public", "aws_cloudtrail") - get column names/types

  3. Write query using both: nested paths from log schema, column names from table schema (see the sketch below)
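
A minimal sketch of that workflow from a generic MCP client. The session object and the log-type parameter name are assumptions for illustration; the tool names and the get_table_schema arguments match this page:

    # Hypothetical MCP client session; tool names as documented on this page.
    log_schema = await session.call_tool(
        "get_panther_log_type_schema", {"log_types": ["AWS.CloudTrail"]}  # parameter name assumed
    )
    table_schema = await session.call_tool(
        "get_table_schema",
        {"database_name": "panther_logs.public", "table_name": "aws_cloudtrail"},
    )
    # Combine both: nested JSON paths from the log schema, column
    # names/types from the table schema, then write the SQL query.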

Returns: Dict containing:

  • success: Boolean indicating if the query was successful
  • name: Table name
  • display_name: Table display name
  • description: Table description
  • log_type: Log type
  • columns: List of columns, each containing:
      • name: Column name
      • type: Column data type
      • description: Column description
  • message: Error message if unsuccessful

Permissions: {'all_of': ['Query Data Lake']}

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| database_name | Yes | The name of the database where the table is located | |
| table_name | Yes | The name of the table to get columns for | |

Output Schema


No arguments

Implementation Reference

  • The core handler function that implements the get_table_schema tool. It takes database_name and table_name parameters, constructs a GraphQL query using GET_COLUMNS_FOR_TABLE_QUERY, executes it via _execute_query, and returns the table's column information including names, types, and descriptions.
    async def get_table_schema(
        database_name: Annotated[
            str,
            Field(
                description="The name of the database where the table is located",
                examples=["panther_logs.public"],
            ),
        ],
        table_name: Annotated[
            str,
            Field(
                description="The name of the table to get columns for",
                examples=["Panther.Audit"],
            ),
        ],
    ) -> Dict[str, Any]:
        """Get column details for a specific data lake table.
    
        IMPORTANT: This returns the table structure in Snowflake. For writing
        optimal queries, ALSO call get_panther_log_type_schema() to understand:
        - Nested object structures (only shown as 'object' type here)
        - Which fields map to p_any_* indicator columns
        - Array element structures
    
        Example workflow:
        1. get_panther_log_type_schema(["AWS.CloudTrail"]) - understand structure
        2. get_table_schema("panther_logs.public", "aws_cloudtrail") - get column names/types
        3. Write query using both: nested paths from log schema, column names from table schema
    
        Returns:
            Dict containing:
            - success: Boolean indicating if the query was successful
            - name: Table name
            - display_name: Table display name
            - description: Table description
            - log_type: Log type
            - columns: List of columns, each containing:
                - name: Column name
                - type: Column data type
                - description: Column description
            - message: Error message if unsuccessful
        """
        table_full_path = f"{database_name}.{table_name}"
        logger.info(f"Fetching column information for table: {table_full_path}")
    
        try:
            # Prepare input variables
            variables = {"databaseName": database_name, "tableName": table_name}
    
            logger.debug(f"Query variables: {variables}")
    
            # Execute the query using shared client
            result = await _execute_query(GET_COLUMNS_FOR_TABLE_QUERY, variables)
    
            # Get query data
            query_data = result.get("dataLakeDatabaseTable", {})
            columns = query_data.get("columns", [])
    
            if not columns:
                logger.warning(f"No columns found for table: {table_full_path}")
                return {
                    "success": False,
                    "message": f"No columns found for table: {table_full_path}",
                }
    
            logger.info(f"Successfully retrieved {len(columns)} columns")
    
            # Format the response
            return {
                "success": True,
                "status": "succeeded",
                **query_data,
                "stats": {
                    "table_count": len(columns),
                },
            }
        except Exception as e:
            logger.error(f"Failed to get columns for table: {str(e)}")
            return {
                "success": False,
                "message": f"Failed to get columns for table: {str(e)}",
            }
  • The @mcp_tool decorator call that registers get_table_schema as an MCP tool, specifying required permissions (DATA_ANALYTICS_READ) and marking it as read-only.
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.DATA_ANALYTICS_READ),
            "readOnlyHint": True,
        }
    )
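  • The all_perms helper is not reproduced on this page. Given the rendered "Permissions: {'all_of': ['Query Data Lake']}" above, a plausible reconstruction (an assumption, not the project's actual code):

    from typing import Any, Dict

    def all_perms(*perms: Any) -> Dict[str, Any]:
        # Hypothetical sketch: wrap the given permissions in an "all_of"
        # clause, using each enum member's display value when available.
        return {"all_of": [getattr(p, "value", str(p)) for p in perms]}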
  • Input schema is defined via the Pydantic Field annotations on database_name and table_name in the handler shown above; the output schema is described in its docstring, returning structured table metadata and a list of columns.
  • GraphQL query definition used by the handler to fetch table column details from the Panther GraphQL API.
    GET_COLUMNS_FOR_TABLE_QUERY = gql("""
    query GetColumnDetails($databaseName: String!, $tableName: String!) {
      dataLakeDatabaseTable(input: { databaseName: $databaseName, tableName: $tableName }) {
        name,
        displayName,
        description,
        logType,
        columns {
          name,
          type,
          description
        }
      }
    }
    """)
  • The mcp_tool decorator and register_all_tools function that handle collecting and registering all decorated tools with the MCP server instance.
    def mcp_tool(
        func: Optional[Callable] = None,
        *,
        name: Optional[str] = None,
        description: Optional[str] = None,
        annotations: Optional[Dict[str, Any]] = None,
    ) -> Callable:
        """
        Decorator to mark a function as an MCP tool.
    
        Functions decorated with this will be automatically registered
        when register_all_tools() is called.
    
        Can be used in two ways:
        1. Direct decoration:
            @mcp_tool
            def my_tool():
                ...
    
        2. With parameters:
            @mcp_tool(
                name="custom_name",
                description="Custom description",
                annotations={"category": "data_analysis"}
            )
            def my_tool():
                ...
    
        Args:
            func: The function to decorate
            name: Optional custom name for the tool. If not provided, uses the function name.
            description: Optional description of what the tool does. If not provided, uses the function's docstring.
            annotations: Optional dictionary of additional annotations for the tool.
        """
    
        def decorator(func: Callable) -> Callable:
            # Store metadata on the function
            func._mcp_tool_metadata = {
                "name": name,
                "description": description,
                "annotations": annotations,
            }
            _tool_registry.add(func)
    
            @wraps(func)
            def wrapper(*args, **kwargs):
                return func(*args, **kwargs)
    
            return wrapper
    
        # Handle both @mcp_tool and @mcp_tool(...) cases
        if func is None:
            return decorator
        return decorator(func)
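  • register_all_tools itself is not reproduced above. Based on the _tool_registry set that the decorator populates, a plausible sketch (an assumption, not the project's actual code):

    def register_all_tools(mcp) -> None:
        # Hypothetical sketch: hand every function collected in _tool_registry
        # to the FastMCP server's tool() registrar, using the metadata the
        # decorator stored on each function. (The real implementation would
        # also forward the stored annotations.)
        for func in _tool_registry:
            meta = getattr(func, "_mcp_tool_metadata", {}) or {}
            mcp.tool(
                name=meta.get("name") or func.__name__,
                description=meta.get("description") or func.__doc__,
            )(func)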

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations provide readOnlyHint=true, but the description adds valuable context beyond this: it specifies the return format in detail (including success flag, table metadata, columns list with name/type/description), mentions permissions requirement ('Permissions:{'all_of': ['Query Data Lake']}'), and explains limitations (e.g., nested structures only shown as 'object' type). No contradiction with annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (purpose, important notes, workflow example, return format, permissions). While slightly longer due to the detailed example, every sentence adds value—no wasted words. It could be slightly more concise but remains highly efficient.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (metadata retrieval with permissions), rich annotations (readOnlyHint), and detailed output schema (implied by the return format description), the description is complete. It covers purpose, usage guidelines, behavioral context, permissions, and integration with other tools, leaving no gaps for agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing clear documentation for both parameters. The description doesn't add additional parameter semantics beyond what's in the schema, but it contextually explains how parameters fit into the workflow (e.g., using 'panther_logs.public' and 'aws_cloudtrail' in the example). Baseline 3 is appropriate given high schema coverage.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get column details') and resource ('for a specific data lake table'), distinguishing it from sibling tools like 'list_database_tables' (which lists tables) or 'get_log_type_schema_details' (which provides log structure details). It precisely defines the tool's scope.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly provides when to use this tool ('to get column names/types') and when to use an alternative ('For writing optimal queries, ALSO call get_panther_log_type_schema()'), including a detailed example workflow. It clearly differentiates this tool's role from complementary tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
