panther-labs

Panther MCP Server

Official

list_databases

Read-only

Retrieve all available data lake databases in Panther's security monitoring platform to identify and access security log repositories for investigation.

Instructions

List all available data lake databases in Panther.

Returns: Dict containing:
- success: Boolean indicating if the query was successful
- databases: List of databases, each containing:
  - name: Database name
  - description: Database description
- message: Error message if unsuccessful

Permissions: {'all_of': ['Query Data Lake']}
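Since the tool returns either a success payload or an error payload, an agent consuming it should branch on the success flag. A minimal sketch of that handling, assuming only the documented field names (success, databases, message) — the helper itself and the sample payload are hypothetical:

```python
from typing import Any, Dict, List


def parse_list_databases_response(response: Dict[str, Any]) -> List[Dict[str, str]]:
    """Return the database list from a list_databases response, or raise on failure.

    Hypothetical helper; field names follow the tool's documented return shape.
    """
    if not response.get("success"):
        # On failure the tool reports a human-readable message
        raise RuntimeError(response.get("message", "unknown error"))
    return response.get("databases", [])


# Illustrative success payload shaped per the documentation above
ok = {
    "success": True,
    "databases": [{"name": "panther_logs", "description": "Log data"}],
}
print([db["name"] for db in parse_list_databases_response(ok)])  # ['panther_logs']
```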

Input Schema


No arguments

Output Schema


No arguments

Implementation Reference

  • The primary handler and registration for the 'list_databases' MCP tool. Decorated with @mcp_tool, it implements the core logic by executing LIST_DATABASES_QUERY.
    @mcp_tool(
        annotations={
            "permissions": all_perms(Permission.DATA_ANALYTICS_READ),
            "readOnlyHint": True,
        }
    )
    async def list_databases() -> Dict[str, Any]:
        """List all available datalake databases in Panther.
    
        Returns:
            Dict containing:
            - success: Boolean indicating if the query was successful
            - databases: List of databases, each containing:
                - name: Database name
                - description: Database description
            - message: Error message if unsuccessful
        """
    
        logger.info("Fetching datalake databases")
    
        try:
            # Execute the query using shared client
            result = await _execute_query(LIST_DATABASES_QUERY, {})
    
            # Get query data
            databases = result.get("dataLakeDatabases", [])
    
            if not databases:
                logger.warning("No databases found")
                return {"success": False, "message": "No databases found"}
    
            logger.info(f"Successfully retrieved {len(databases)} results")
    
            # Format the response
            return {
                "success": True,
                "status": "succeeded",
                "databases": databases,
                "stats": {
                    "database_count": len(databases),
                },
            }
        except Exception as e:
            logger.error(f"Failed to fetch database results: {str(e)}")
            return {
                "success": False,
                "message": f"Failed to fetch database results: {str(e)}",
            }
  • GraphQL query definition used by the list_databases tool to retrieve the list of available data lake databases.
    LIST_DATABASES_QUERY = gql("""
    query ListDatabases {
        dataLakeDatabases {
            name
            description
        }
    }
    """)
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The annotations provide readOnlyHint=true, indicating a safe read operation. The description adds value by specifying the return structure (Dict with success, databases, message) and permissions requirement ('Query Data Lake'), which are not covered by annotations. It doesn't mention rate limits or side effects, but with annotations covering safety, this is sufficient for good transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured and front-loaded, starting with the core purpose, followed by return details and permissions. Every sentence adds value: the first defines the action, the second explains the output format, and the third specifies permissions. There is no wasted text or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's simplicity (0 parameters, read-only, with output schema provided), the description is complete. It covers the purpose, output structure, and permissions, which are essential for an agent to use it correctly. The presence of an output schema means the description doesn't need to detail return values, and it adequately addresses the tool's context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 0 parameters with 100% coverage, so no parameter documentation is needed. The description appropriately doesn't discuss parameters, focusing instead on output and permissions. This meets the baseline of 4 for zero-parameter tools, as it avoids unnecessary repetition.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb ('List') and resource ('all available datalake databases in Panther'), making the purpose specific and understandable. However, it doesn't explicitly differentiate from sibling tools like 'list_database_tables' or 'list_data_models', which would require mentioning what this tool does NOT do (e.g., list tables within databases).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. While it mentions permissions ('Query Data Lake'), it doesn't specify use cases, prerequisites, or comparisons with related tools like 'list_database_tables' for table-level listing or 'query_data_lake' for querying data. This leaves the agent without context for tool selection.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
