mcp-oceanbase

Official, by oceanbase

get_ob_ash_report

Generate an OceanBase Active Session History (ASH) report to analyze system performance. Capture session details, SQL IDs, wait events, and module actions for precise diagnostics.

Instructions

Get OceanBase Active Session History report.
ASH can sample the status of all Active Sessions in the system at 1-second intervals, including:
    Current executing SQL ID
    Current wait events (if any)
    Wait time and wait parameters
    The module where the SESSION is located during sampling (PARSE, EXECUTE, PL, etc.)
    SESSION status records, such as SESSION MODULE, ACTION, CLIENT ID
This is very useful when you perform performance analysis.

Input Schema

Name        Required  Description  Default
end_time    Yes       —            —
start_time  Yes       —            —
tenant_id   No        —            —
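
A well-formed argument payload for this tool might look like the following sketch (the values are hypothetical; the docstring's Java-style format yyyy-MM-dd HH:mm:ss corresponds to %Y-%m-%d %H:%M:%S in Python):

```python
from datetime import datetime

# Hypothetical example arguments for get_ob_ash_report.
args = {
    "start_time": "2024-05-01 10:00:00",
    "end_time": "2024-05-01 10:30:00",
    "tenant_id": None,  # None (or omitted) means no tenant restriction
}

# Sanity-check the timestamp format and ordering before invoking the tool.
fmt = "%Y-%m-%d %H:%M:%S"
start = datetime.strptime(args["start_time"], fmt)
end = datetime.strptime(args["end_time"], fmt)
assert start < end, "start_time must precede end_time"
```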

Implementation Reference

  • The handler function for the 'get_ob_ash_report' tool, decorated with @app.tool(), which registers it with the MCP server. It constructs and executes a SQL call to DBMS_WORKLOAD_REPOSITORY.ASH_REPORT using the provided start_time, end_time, and optional tenant_id, and returns the result or an error message.
    @app.tool()
    def get_ob_ash_report(
        start_time: str,
        end_time: str,
        tenant_id: Optional[int] = None,
    ) -> str:
        """
        Get OceanBase Active Session History report.
        ASH can sample the status of all Active Sessions in the system at 1-second intervals, including:
            Current executing SQL ID
            Current wait events (if any)
            Wait time and wait parameters
            The module where the SESSION is located during sampling (PARSE, EXECUTE, PL, etc.)
            SESSION status records, such as SESSION MODULE, ACTION, CLIENT ID
        This will be very useful when you perform performance analysis.RetryClaude can make mistakes. Please double-check responses.
    
        Args:
            start_time: Sample start time, format: yyyy-MM-dd HH:mm:ss.
            end_time: Sample end time, format: yyyy-MM-dd HH:mm:ss.
            tenant_id: Used to specify the tenant ID for generating the ASH Report. Leaving this field blank or setting it to NULL indicates no restriction on the TENANT_ID.
        """
        logger.info(
            f"Calling tool: get_ob_ash_report with arguments: {start_time}, {end_time}, {tenant_id}"
        )
        # Construct the SQL query
        sql_query = f"""
            CALL DBMS_WORKLOAD_REPOSITORY.ASH_REPORT('{start_time}','{end_time}', NULL, NULL, NULL, 'TEXT', NULL, NULL, {tenant_id if tenant_id is not None else "NULL"});
        """
        try:
            return execute_sql(sql_query)
        except Error as e:
            logger.error(f"Error getting ASH report, executing SQL '{sql_query}': {e}")
            return f"Error getting ASH report: {str(e)}"
  • The @app.tool() decorator registers the get_ob_ash_report function as an MCP tool.
    @app.tool()
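
To make the tenant_id handling concrete, here is a minimal standalone sketch of the CALL statement the handler assembles (mirroring the f-string in the snippet above; this helper function is illustrative, not part of the actual module):

```python
# Standalone sketch of the statement assembled by the handler above.
# When tenant_id is None, the SQL literal NULL is passed as the last argument.
def build_ash_report_sql(start_time: str, end_time: str, tenant_id=None) -> str:
    tenant = str(tenant_id) if tenant_id is not None else "NULL"
    return (
        "CALL DBMS_WORKLOAD_REPOSITORY.ASH_REPORT("
        f"'{start_time}','{end_time}', NULL, NULL, NULL, 'TEXT', NULL, NULL, {tenant});"
    )

sql = build_ash_report_sql("2024-05-01 10:00:00", "2024-05-01 10:30:00")
print(sql)
```

Note that, as in the original handler, the values are interpolated directly into the SQL string, so callers should validate the timestamps and tenant ID before the call.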
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden for behavioral disclosure. It describes what ASH samples (e.g., 1-second intervals, specific data points) and mentions usefulness for performance analysis, but lacks critical behavioral details: it doesn't specify if this is a read-only operation, potential performance impact, authentication requirements, rate limits, or error handling. For a tool with no annotation coverage, this leaves significant gaps in understanding how the tool behaves.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is moderately concise but includes unnecessary content: 'RetryClaude can make mistakes. Please double-check responses.' is irrelevant and should be removed. The core explanation is front-loaded with the purpose, but the structure could be improved by directly linking parameters to the described functionality. It's not overly verbose but has wasted sentences.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, no output schema, and 3 parameters, the description is incomplete. It explains what ASH is and its utility but misses key contextual elements: parameter meanings, return format, behavioral constraints, and differentiation from sibling tools. For a performance analysis tool with complex inputs, this leaves too much undefined for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the schema provides no parameter descriptions. The description mentions 'ASH can sample... at 1-second intervals' but doesn't explain the three parameters (start_time, end_time, tenant_id) or their roles in filtering the report. It fails to compensate for the schema gap, leaving parameters undocumented in both schema and description.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get OceanBase Active Session History report' with specific details about what ASH samples (SQL ID, wait events, wait time, module, SESSION status). It distinguishes this from siblings like execute_sql or search_oceanbase_document by focusing on performance analysis reports rather than direct SQL execution or documentation search. However, it doesn't explicitly contrast with get_all_server_nodes or get_resource_capacity which might also relate to performance monitoring.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context with 'This will be very useful when you perform performance analysis,' suggesting it's for performance troubleshooting scenarios. However, it doesn't provide explicit guidance on when to use this tool versus alternatives like get_resource_capacity or execute_sql for performance insights, nor does it mention prerequisites or exclusions. The guidance is helpful but not comprehensive.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
