Glama

ms-sentinel-mcp-server

by dstreefkerk

sentinel_logs_tables_list

List available tables in Microsoft Sentinel Log Analytics workspaces to identify data sources for querying and analysis.

Instructions

List available tables in the Log Analytics workspace

Input Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| kwargs | Yes      |             |         |
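The schema exposes only an opaque kwargs object; the implementation below reads two optional keys from it. A minimal sketch of the expected argument shape (key names and defaults are taken from the implementation's docstring; the parsing here is illustrative, not the server's actual code):

```python
# Illustrative only: the argument shape the tool expects.
# Key names and defaults mirror the ListTablesTool docstring.
kwargs = {"filter_pattern": "Security", "include_stats": False}

filter_pattern = kwargs.get("filter_pattern", "")   # "" -> no name filtering
include_stats = kwargs.get("include_stats", False)  # False -> fast, names-only query
```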

Implementation Reference

  • ListTablesTool class defines the 'sentinel_logs_tables_list' tool. Includes name, description, and the full async run() method that implements the core logic: extracts parameters, gets logs client, checks cache, executes KQL queries to list tables (with optional stats and filtering), processes results, handles errors, and caches output.
    class ListTablesTool(MCPToolBase):
        """
        Tool to list available tables in the Log Analytics workspace.
    
        Parameters:
            filter_pattern (str, optional): Pattern to filter table names
            include_stats (bool, optional): Include row counts and last updated times (default: False)
                                           WARNING: Setting to True can be slow in large environments
    
        Returns:
            dict: {
                'found': int,            # Number of tables found
                'tables': list,          # List of table metadata dicts
                                        # If include_stats=False: [{'name': str}]
                                        # If include_stats=True: [{'name': str, 'lastUpdated': str, 'rowCount': int}]
                'error': str (optional)  # Error message if applicable
            }
        """
    
        name = "sentinel_logs_tables_list"
        description = "List available tables in the Log Analytics workspace"
    
        async def run(self, ctx: Context, **kwargs):
            """
            List available tables in the Log Analytics workspace.
    
            Args:
                ctx (Context): The MCP tool context.
                **kwargs: Optional filter_pattern to filter table names.
                         Optional include_stats (bool) to include row counts and last updated times.
    
            Returns:
                dict: Results as described in the class docstring.
            """
            filter_pattern = self._extract_param(kwargs, "filter_pattern", "")
            include_stats = self._extract_param(kwargs, "include_stats", False)
    
            logs_client, workspace_id = self.get_logs_client_and_workspace(ctx)
            cache_key = f"tables_json:{workspace_id}:{filter_pattern}:{include_stats}"
            cached = cache.get(cache_key)
            if cached:
                return cached
            if logs_client is None:
                result = {
                    "error": (
                        "Azure Logs client is not initialized. "
                        "Check your credentials and configuration."
                    )
                }
                cache.set(cache_key, result)
                return result
            try:
                # Simple query for table names only (fast)
                if not include_stats:
                    kql_table_names = (
                        "search *\n"
                        "| distinct $table\n"
                        "| project TableName = $table\n"
                        "| order by TableName asc"
                    )
                    if filter_pattern:
                        kql_table_names = (
                            "search *\n"
                            "| distinct $table\n"
                            "| project TableName = $table\n"
                            f'| where TableName contains "{filter_pattern}"\n'
                            "| order by TableName asc"
                        )
                    query = kql_table_names
                    timespan = timedelta(days=1)  # Minimal timespan for fast query
                else:
                    # Full query with stats (expensive)
                    kql_table_info = (
                        "search *\n"
                        "| distinct $table\n"
                        "| extend TableName = $table\n"
                        "| project-away $table\n"
                        "| join kind=leftouter (\n"
                        "    union withsource=TableSource *\n"
                        "    | summarize LastUpdate=max(TimeGenerated),\n"
                        "      RowCount=count() by TableSource\n"
                        "    | project TableSource, LastUpdate, RowCount\n"
                        ") on $left.TableName == $right.TableSource\n"
                        "| project name=TableName, lastUpdated=LastUpdate, "
                        "rowCount=RowCount\n"
                        "| order by name asc"
                    )
                    if filter_pattern:
                        kql_table_info = (
                            "search *\n"
                            "| distinct $table\n"
                            "| extend TableName = $table\n"
                            "| project-away $table\n"
                            "| join kind=leftouter (\n"
                            "    union withsource=TableSource *\n"
                            "    | summarize LastUpdate=max(TimeGenerated),\n"
                            "      RowCount=count() by TableSource\n"
                            "    | project TableSource, LastUpdate, RowCount\n"
                            ") on $left.TableName == $right.TableSource\n"
                            "| project name=TableName, lastUpdated=LastUpdate, "
                            "rowCount=RowCount\n"
                            f'| where name contains "{filter_pattern}"\n'
                            "| order by name asc"
                        )
                    query = kql_table_info
                    timespan = timedelta(days=30)  # Reduced from 90 days for better performance
    
                response = await run_in_thread(
                    logs_client.query_workspace,
                    workspace_id=workspace_id,
                    query=query,
                    timespan=timespan,
                    name="list_tables_info" if include_stats else "list_table_names",
                )
                if response and response.tables and len(response.tables[0].rows) > 0:
                    tables = []
                    for row in response.tables[0].rows:
                        if not include_stats:
                            # Simple mode: only table names
                            table = {"name": row[0]}
                        else:
                            # Full mode: include stats
                            table = {"name": row[0], "lastUpdated": row[1], "rowCount": row[2]}
                        tables.append(table)
                    result = {"found": len(tables), "tables": tables}
                    cache.set(cache_key, result)
                    return result
                result = {
                    "found": 0,
                    "tables": [],
                    "error": (
                        "No tables found. The workspace may be empty "
                        "or you may not have access to the data."
                    ),
                }
                cache.set(cache_key, result)
                return result
            except TimeoutError:
                error_msg = (
                    "Query timed out. The workspace may have too many tables or too much data. "
                    "Try using include_stats=False for faster results, or use a filter_pattern to reduce the scope."
                )
                result = {"error": error_msg}
                self.logger.error("Query timeout in list tables: %s", error_msg)
                cache.set(cache_key, result)
                return result
            except Exception as e:
                result = {"error": "Failed to list tables: %s" % str(e)}
                self.logger.error("Failed to list tables: %s", str(e))
                cache.set(cache_key, result)
                return result
  • Class docstring provides the tool schema: input parameters filter_pattern (str optional), include_stats (bool optional); output dict with 'found', 'tables' list (with optional stats), and optional 'error'.
  • register_tools(mcp) function registers the ListTablesTool (along with other table tools) by calling ListTablesTool.register(mcp). This is loaded dynamically by server.py.
    def register_tools(mcp):
        """
        Register all table tools with the given MCP instance.
    
        Args:
            mcp: The MCP instance to register tools with.
        """
        ListTablesTool.register(mcp)
        GetTableSchemaTool.register(mcp)
        GetTableDetailsTool.register(mcp)
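The filtered and unfiltered KQL variants in run() differ only by a single where clause; that construction can be sketched with a small helper (build_table_names_query is a hypothetical name, not part of the server; it mirrors the fast, include_stats=False branch):

```python
def build_table_names_query(filter_pattern: str = "") -> str:
    """Sketch of the fast (include_stats=False) KQL built in run() above."""
    lines = [
        "search *",
        "| distinct $table",
        "| project TableName = $table",
    ]
    if filter_pattern:
        # Mirrors the injected where clause; note the raw string
        # interpolation, which inherits the original code's lack of escaping.
        lines.append(f'| where TableName contains "{filter_pattern}"')
    lines.append("| order by TableName asc")
    return "\n".join(lines)
```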
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states that the tool lists tables but doesn't explain what 'available' means, whether authentication is required, whether rate limits apply, or what the output format looks like. This is a significant gap for a tool with zero annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
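The MCP specification defines behavioral hint fields (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) that tools can declare alongside the description. A hypothetical sketch of what a read-only listing tool like this one might declare (the values are assumptions inferred from the implementation, not anything the server actually sets):

```python
# Hypothetical MCP tool annotations for a read-only listing tool.
# Field names come from the MCP ToolAnnotations spec; values are assumptions.
annotations = {
    "readOnlyHint": True,       # lists tables; does not modify the workspace
    "destructiveHint": False,   # no data is deleted or changed
    "idempotentHint": True,     # repeated calls return the same listing
    "openWorldHint": True,      # queries an external Azure service
}
```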

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that directly states the tool's purpose without unnecessary words. It's appropriately sized and front-loaded, making it easy to understand at a glance.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a tool with no annotations, no output schema, and one undocumented parameter, the description is incomplete. It explains the tool's purpose at a high level but lacks the behavioral, parameter, and output details an agent would need to use it correctly on the first attempt.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema exposes a single opaque parameter ('kwargs') with no description coverage, and the tool description does nothing to compensate: it never mentions the filter_pattern or include_stats options that the implementation actually reads.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
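One way to close this gap is to surface the two real parameters in the input schema instead of a bare kwargs. A hypothetical JSON Schema fragment (parameter names and semantics are taken from the implementation's docstring; the structure here is illustrative, not the server's actual schema):

```python
# Hypothetical documented schema addressing the gap noted above.
# Parameter names and defaults come from the ListTablesTool docstring.
input_schema = {
    "type": "object",
    "properties": {
        "filter_pattern": {
            "type": "string",
            "description": "Substring match used to filter table names.",
        },
        "include_stats": {
            "type": "boolean",
            "default": False,
            "description": (
                "Include row counts and last-updated times. "
                "Can be slow in large workspaces."
            ),
        },
    },
    "required": [],
}
```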

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List') and resource ('available tables in the Log Analytics workspace'), providing a specific verb+resource combination. However, it doesn't explicitly differentiate from sibling tools like 'sentinel_logs_table_details_get' or 'sentinel_logs_table_schema_get', which prevents a perfect score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. There's no mention of prerequisites, context, or comparison to sibling tools like 'sentinel_logs_search' or 'sentinel_logs_table_details_get', leaving the agent with no usage direction.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
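A hedged example of how the description string could embed this guidance (the sibling tool names come from the review above; the wording itself is illustrative, not the server's actual description):

```python
# Illustrative rewrite only; not the server's actual description string.
description = (
    "List available tables in the Log Analytics workspace. "
    "Use this first to discover data sources before querying with "
    "sentinel_logs_search; for column-level detail on a single table, "
    "use sentinel_logs_table_schema_get instead."
)
```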
