
Alibaba Cloud RDS OpenAPI MCP Server

Official
by aliyun

describe_db_instance_performance

Read-only

Query performance metrics like CPU usage, QPS/TPS, sessions, and throughput for Alibaba Cloud RDS database instances to monitor and analyze database health.

Instructions

Queries the performance data of an instance using the RDS OpenAPI.
This method provides performance data collected from the RDS service, such as MemCpuUsage, QPSTPS, Sessions, ThreadStatus, MBPS, etc.

Args:
    region_id: db instance region (e.g. cn-hangzhou)
    db_instance_id: db instance id (e.g. rm-xxx)
    db_type: the db instance database type (e.g. mysql, pgsql, sqlserver)
    perf_keys: performance keys (e.g. ["MemCpuUsage", "QPSTPS", "Sessions", "COMDML", "RowDML", "ThreadStatus", "MBPS", "DetailedSpaceUsage"])
    start_time: start time (e.g. 2023-01-01 00:00)
    end_time: end time (e.g. 2023-01-01 00:00)
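
The start_time and end_time examples use a minute-resolution layout. A minimal sketch of the expected parsing, assuming the `%Y-%m-%d %H:%M` format shown in the examples (the server's actual `transform_to_datetime` helper may accept other layouts as well):

```python
from datetime import datetime

def parse_perf_time(value: str) -> datetime:
    """Parse a start_time/end_time string like '2023-01-01 00:00'."""
    return datetime.strptime(value, "%Y-%m-%d %H:%M")

start = parse_perf_time("2023-01-01 00:00")
end = parse_perf_time("2023-01-02 12:30")
assert end > start
```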

Input Schema

Name            Required
--------------  --------
region_id       Yes
db_instance_id  Yes
db_type         Yes
perf_keys       Yes
start_time      Yes
end_time        Yes

(The published schema carries no per-parameter descriptions or defaults.)
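
Since the schema rows above carry no descriptions, a plausible JSON Schema reconstruction, with field names taken from the Args section and types assumed (the schema actually published by the server may differ), might look like:

```python
# Hypothetical input schema for the tool, inferred from the Args section;
# types are assumptions, not taken from the server's published schema.
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "region_id": {"type": "string"},
        "db_instance_id": {"type": "string"},
        "db_type": {"type": "string"},
        "perf_keys": {"type": "array", "items": {"type": "string"}},
        "start_time": {"type": "string"},
        "end_time": {"type": "string"},
    },
    "required": ["region_id", "db_instance_id", "db_type",
                 "perf_keys", "start_time", "end_time"],
}

# All six parameters are required, matching the table above.
assert sorted(INPUT_SCHEMA["required"]) == sorted(INPUT_SCHEMA["properties"])
```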

Implementation Reference

  • The main handler for the 'describe_db_instance_performance' tool. It queries performance metrics for an RDS DB instance through the Alibaba Cloud DescribeDBInstancePerformance API: it maps the input perf_keys to db_type-specific keys via transform_perf_key, calls the SDK, downsamples the series when there are too many data points, and formats the output.
    @mcp.tool(annotations=READ_ONLY_TOOL)
    async def describe_db_instance_performance(region_id: str,
                                               db_instance_id: str,
                                               db_type: str,
                                               perf_keys: list[str],
                                               start_time: str,
                                               end_time: str):
        """
        Queries the performance data of an instance using the RDS OpenAPI.
        This method provides performance data collected from the RDS service, such as MemCpuUsage, QPSTPS, Sessions, ThreadStatus, MBPS, etc.
        
        Args:
            region_id: db instance region (e.g. cn-hangzhou)
            db_instance_id: db instance id (e.g. rm-xxx)
            db_type: the db instance database type (e.g. mysql, pgsql, sqlserver)
            perf_keys: performance keys (e.g. ["MemCpuUsage", "QPSTPS", "Sessions", "COMDML", "RowDML", "ThreadStatus", "MBPS", "DetailedSpaceUsage"])
            start_time: start time (e.g. 2023-01-01 00:00)
            end_time: end time (e.g. 2023-01-01 00:00)
        """
    
        def _compress_performance(performance_value, max_items=10):
            # Downsample to roughly max_items points, keeping the peak sample per bucket.
            if len(performance_value) > max_items:
                result = []
                offset = len(performance_value) / max_items
                for i in range(0, len(performance_value), int(offset)):
                    _item = None
                    for j in range(i, min(i + int(offset), len(performance_value))):
                        if _item is None or sum([float(v) for v in performance_value[j].value.split('&')]) > sum(
                                [float(v) for v in _item.value.split('&')]):
                            _item = performance_value[j]
                    
                    _item.date = parse_iso_8601(_item.date)
                    result.append(_item)
                return result
            else:
                for item in performance_value:
                    item.date = parse_iso_8601(item.date)
                return performance_value
    
        try:
            start_time = transform_to_datetime(start_time)
            end_time = transform_to_datetime(end_time)
            client = get_rds_client(region_id)
            perf_key = transform_perf_key(db_type, perf_keys)
            if not perf_key:
                raise OpenAPIError(f"Unsupported perf_keys: {perf_keys}")
            request = rds_20140815_models.DescribeDBInstancePerformanceRequest(
                dbinstance_id=db_instance_id,
                start_time=transform_to_iso_8601(start_time, "minutes"),
                end_time=transform_to_iso_8601(end_time, "minutes"),
                key=",".join(perf_key)
            )
            response = client.describe_dbinstance_performance(request)
            responses = []
            for perf_key in response.body.performance_keys.performance_key:
                perf_key_info = f"""Key={perf_key.key}; Unit={perf_key.unit}; ValueFormat={perf_key.value_format}; Values={json_array_to_csv(_compress_performance(perf_key.values.performance_value))}"""
                responses.append(perf_key_info)
            return responses
        except Exception as e:
            raise e
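
The bucket-wise downsampling in _compress_performance (split the series into roughly ten buckets and keep the peak sample from each) can be sketched on plain values. PerfPoint here is a hypothetical stand-in for the SDK's performance-value objects, and the helper name compress is illustrative, not part of the server's code:

```python
from dataclasses import dataclass

@dataclass
class PerfPoint:
    date: str
    value: str  # '&'-separated numeric series, as in the RDS response

def compress(points: list[PerfPoint], max_items: int = 10) -> list[PerfPoint]:
    """Downsample to ~max_items points, keeping the peak sample per bucket."""
    if len(points) <= max_items:
        return points
    step = len(points) // max_items
    result = []
    for i in range(0, len(points), step):
        bucket = points[i:i + step]
        # The "peak" is the point whose '&'-joined values sum highest.
        result.append(max(bucket,
                          key=lambda p: sum(float(v) for v in p.value.split('&'))))
    return result

pts = [PerfPoint(f"t{i}", str(i % 7)) for i in range(30)]
compressed = compress(pts)
assert len(compressed) == 10
```

Keeping per-bucket maxima rather than averages biases the compressed series toward spikes, which is usually the right call for spotting CPU or QPS anomalies.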
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description adds context beyond annotations: it specifies the data source ('RDS OpenAPI') and lists example performance metrics. Annotations provide readOnlyHint=true, indicating a safe read operation, which the description aligns with by using 'queries'. However, it doesn't disclose behavioral traits like rate limits, authentication needs, or data freshness. With annotations covering safety, the description adds some value but not rich behavioral details.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is structured with a purpose statement followed by an Args section, but it's somewhat verbose with repetitive examples. Sentences like 'This method provides performance data collected from the RDS service' could be more concise. It's front-loaded with the main purpose, but the parameter explanations are lengthy. Overall, it's adequate but could be tighter.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (6 required parameters, no output schema, annotations only cover read-only hint), the description is moderately complete. It explains parameters well but lacks output details, error handling, or usage context. For a performance query tool, it should ideally mention data granularity or time range limits. It's sufficient for basic use but has gaps for full agent guidance.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds significant meaning beyond the input schema, which has 0% description coverage. It explains each parameter with examples: region_id as 'db instance region (e.g. cn-hangzhou)', db_type as 'database type (e.g. mysql,pgsql,sqlserver)', and perf_keys with a list of specific keys. This compensates for the schema's lack of descriptions, though it doesn't cover all nuances like time format constraints.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Queries the performance data of an instance using the RDS OpenAPI' with specific examples of metrics like MemCpuUsage and QPSTPS. It distinguishes from siblings like describe_db_instance_attribute or describe_monitor_metrics by focusing on performance data, though it doesn't explicitly name alternatives. The verb 'queries' and resource 'performance data of an instance' are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It mentions the tool queries performance data but doesn't specify scenarios, prerequisites, or exclusions. Given siblings like describe_monitor_metrics, there's no help for an agent in choosing between them. Usage is implied only by the tool's name and description, with no explicit context.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
