dba_resusageSummary

Analyze Teradata system resource usage by aggregating metrics across dimensions like time, workload type, complexity, and user to identify performance patterns and optimize database efficiency.

Instructions

Get the Teradata system usage summary metrics by weekday and hour for each workload type and query complexity bucket.

Arguments:

  • dimensions - list of dimensions to aggregate the resource usage summary. All dimensions are: ["LogDate", "hourOfDay", "dayOfWeek", "workloadType", "workloadComplexity", "UserName", "AppId"]
  • user_name - user name
  • date - date to analyze, formatted as YYYY-MM-DD
  • dayOfWeek - day of the week to analyze
  • hourOfDay - hour of day to analyze
  • workloadType - workload type to analyze, for example: 'LOAD', 'ETL/ELT', 'EXPORT', 'QUERY', 'ADMIN', 'OTHER'
  • workloadComplexity - workload complexity to analyze, for example: 'Ingest & Prep', 'Answers', 'System/Procedural'
  • AppId - application ID to analyze (a LIKE pattern), for example: 'TPTLOAD%', 'TPTUPD%', 'FASTLOAD%', 'MULTLOAD%', 'EXECUTOR%', 'JDBC%'
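
For illustration, a client request that breaks down recent load activity by weekday and hour could pass arguments like the following (a hypothetical payload; the names match the input schema below):

    # Hypothetical tool arguments: summarize LOAD work by weekday and hour.
    arguments = {
        "dimensions": ["dayOfWeek", "hourOfDay"],
        "workloadType": "LOAD",
        "AppId": "TPTLOAD%",  # LIKE pattern, matched against DBC.DBQLogTbl.AppID
    }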

Input Schema

Name                Required  Default  Description
dimensions          No        None     Dimensions to aggregate by; any of LogDate, hourOfDay, dayOfWeek, workloadType, workloadComplexity, UserName, AppId
user_name           No        None     User name to filter on
date                No        None     Date to analyze, formatted as YYYY-MM-DD
dayOfWeek           No        None     Day of the week to analyze
hourOfDay           No        None     Hour of day to analyze
workloadType        No        None     Workload type, e.g. 'LOAD', 'ETL/ELT', 'EXPORT', 'QUERY', 'ADMIN', 'OTHER'
workloadComplexity  No        None     Workload complexity, e.g. 'Ingest & Prep', 'Answers', 'System/Procedural'
AppId               No        None     Application ID LIKE pattern, e.g. 'TPTLOAD%', 'TPTUPD%', 'FASTLOAD%', 'MULTLOAD%', 'EXECUTOR%', 'JDBC%'

Implementation Reference

  • The handler function that implements the core logic of the 'dba_resusageSummary' tool. It dynamically constructs and executes a complex SQL query against DBC.DBQLogTbl to provide resource usage summaries aggregated by user-specified dimensions (e.g., LogDate, hourOfDay, workloadType) and filters.
    def handle_dba_resusageSummary(
        conn: TeradataConnection,
        dimensions: list[str] | None = None,
        user_name: str | None = None,
        date: str | None = None,
        dayOfWeek: str | None = None,
        hourOfDay: str | None = None,
        workloadType: str | None = None,
        workloadComplexity: str | None = None,
        AppId: str | None = None,
        *args,
        **kwargs,
    ):
        """
        Get the Teradata system usage summary metrics by weekday and hour for each
        workload type and query complexity bucket.

        Arguments:
          dimensions - list of dimensions to aggregate the resource usage summary.
            All dimensions are: ["LogDate", "hourOfDay", "dayOfWeek", "workloadType",
            "workloadComplexity", "UserName", "AppId"]
          user_name - user name
          date - Date to analyze, formatted as `YYYY-MM-DD`
          dayOfWeek - day of the week to analyze
          hourOfDay - hour of day to analyze
          workloadType - workload type to analyze, example: 'LOAD', 'ETL/ELT',
            'EXPORT', 'QUERY', 'ADMIN', 'OTHER'
          workloadComplexity - workload complexity to analyze, example:
            'Ingest & Prep', 'Answers', 'System/Procedural'
          AppId - Application ID to analyze, example: 'TPTLOAD%', 'TPTUPD%',
            'FASTLOAD%', 'MULTLOAD%', 'EXECUTOR%', 'JDBC%'
        """
        logger.debug(f"Tool: handle_dba_resusageSummary: Args: dimensions: {dimensions}")

        comment = "Total system resource usage summary."

        # If dimensions were provided, keep only the supported ones and
        # remember any that must be ignored.
        allowed_dimensions = ["LogDate", "hourOfDay", "dayOfWeek", "workloadType",
                              "workloadComplexity", "UserName", "AppId"]
        unsupported_dimensions = []
        if dimensions is not None:
            unsupported_dimensions = [dim for dim in dimensions if dim not in allowed_dimensions]
            dimensions = [dim for dim in dimensions if dim in allowed_dimensions]
        else:
            dimensions = []

        # Update the comment string based on the dimensions used and supported.
        if dimensions:
            comment += " Metrics aggregated by " + ", ".join(dimensions) + "."
        if unsupported_dimensions:
            comment += (" The following dimensions are not supported and will be ignored: "
                        + ", ".join(unsupported_dimensions) + ".")

        # Dynamically construct the SELECT and GROUP BY clauses based on dimensions.
        dim_string = ", ".join(dimensions)
        group_by_clause = ("GROUP BY " if dimensions else "") + dim_string
        dim_string += ("," if dimensions else "")

        # Each optional argument narrows the inner query.
        filter_clause = ""
        filter_clause += f"AND UserName = '{user_name}' " if user_name else ""
        filter_clause += f"AND LogDate = '{date}' " if date else ""
        filter_clause += f"AND dayOfWeek = '{dayOfWeek}' " if dayOfWeek else ""
        filter_clause += f"AND hourOfDay = '{hourOfDay}' " if hourOfDay else ""
        filter_clause += f"AND workloadType = '{workloadType}' " if workloadType else ""
        filter_clause += f"AND workloadComplexity = '{workloadComplexity}' " if workloadComplexity else ""
        filter_clause += f"AND AppID LIKE '{AppId}' " if AppId else ""

        query = f"""
        SELECT
            {dim_string}
            COUNT(*) AS "Request Count",
            SUM(AMPCPUTime) AS "Total AMPCPUTime",
            SUM(TotalIOCount) AS "Total IOCount",
            SUM(ReqIOKB) AS "Total ReqIOKB",
            SUM(ReqPhysIO) AS "Total ReqPhysIO",
            SUM(ReqPhysIOKB) AS "Total ReqPhysIOKB",
            SUM(SumLogIO_GB) AS "Total ReqIO GB",
            SUM(SumPhysIO_GB) AS "Total ReqPhysIOGB",
            SUM(TotalServerByteCount) AS "Total Server Byte Count"
        FROM (
            SELECT
                CAST(QryLog.Starttime as DATE) AS LogDate,
                EXTRACT(HOUR FROM StartTime) AS hourOfDay,
                CASE QryCal.day_of_week
                    WHEN 1 THEN 'Sunday'
                    WHEN 2 THEN 'Monday'
                    WHEN 3 THEN 'Tuesday'
                    WHEN 4 THEN 'Wednesday'
                    WHEN 5 THEN 'Thursday'
                    WHEN 6 THEN 'Friday'
                    WHEN 7 THEN 'Saturday'
                END AS dayOfWeek,
                QryLog.UserName,
                QryLog.AcctString,
                QryLog.AppID,
                CASE
                    WHEN QryLog.AppID LIKE ANY('TPTLOAD%', 'TPTUPD%', 'FASTLOAD%', 'MULTLOAD%', 'EXECUTOR%', 'JDBCL%')
                        THEN 'LOAD'
                    WHEN QryLog.StatementType IN ('Insert', 'Update', 'Delete', 'Create Table', 'Merge Into')
                         AND QryLog.AppID NOT LIKE ANY('TPTLOAD%', 'TPTUPD%', 'FASTLOAD%', 'MULTLOAD%', 'EXECUTOR%', 'JDBCL%')
                        THEN 'ETL/ELT'
                    WHEN QryLog.StatementType = 'Select'
                         AND (AppID IN ('TPTEXP', 'FASTEXP') OR AppID LIKE 'JDBCE%')
                        THEN 'EXPORT'
                    WHEN QryLog.StatementType = 'Select'
                         AND QryLog.AppID NOT LIKE ANY('TPTLOAD%', 'TPTUPD%', 'FASTLOAD%', 'MULTLOAD%', 'EXECUTOR%', 'JDBCL%')
                        THEN 'QUERY'
                    WHEN QryLog.StatementType IN ('Dump Database', 'Unrecognized type', 'Release Lock', 'Collect Statistics')
                        THEN 'ADMIN'
                    ELSE 'OTHER'
                END AS workloadType,
                CASE
                    WHEN StatementType = 'Merge Into' THEN 'Ingest & Prep'
                    WHEN StatementType = 'Select' THEN 'Answers'
                    ELSE 'System/Procedural'
                END AS workloadComplexity,
                QryLog.AMPCPUTime,
                QryLog.TotalIOCount,
                QryLog.ReqIOKB,
                QryLog.ReqPhysIO,
                QryLog.ReqPhysIOKB,
                QryLog.TotalServerByteCount,
                (QryLog.ReqIOKB / 1024 / 1024) AS SumLogIO_GB,
                (QryLog.ReqPhysIOKB / 1024 / 1024) AS SumPhysIO_GB
            FROM DBC.DBQLogTbl QryLog
            INNER JOIN Sys_Calendar.CALENDAR QryCal
                ON QryCal.calendar_date = CAST(QryLog.Starttime as DATE)
            WHERE CAST(QryLog.Starttime as DATE) BETWEEN CURRENT_DATE - 30 AND CURRENT_DATE
              AND StartTime IS NOT NULL
              {filter_clause}
        ) AS QryDetails
        {group_by_clause}
        """
        logger.debug(f"Tool: handle_dba_resusageSummary: Query: {query}")

        with conn.cursor() as cur:
            logger.debug("Resource usage summary requested.")
            rows = cur.execute(query)
            data = rows_to_json(cur.description, rows.fetchall())
            metadata = {
                "tool_name": "dba_resusageSummary",
                "total_rows": len(data),
                "comment": comment,
                "rows": len(data),
            }
            logger.debug(f"Tool: handle_dba_resusageSummary: metadata: {metadata}")
            return create_response(data, metadata)
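    The dimension filtering and clause construction above can be exercised on their own; this minimal sketch reproduces those few lines outside the handler and runs without a database connection:

    # Standalone sketch of the handler's clause building; it mirrors the code
    # above but is not the packaged tool itself.
    allowed = ["LogDate", "hourOfDay", "dayOfWeek", "workloadType",
               "workloadComplexity", "UserName", "AppId"]
    requested = ["hourOfDay", "workloadType", "bogusDim"]

    dims = [d for d in requested if d in allowed]
    ignored = [d for d in requested if d not in allowed]

    dim_string = ", ".join(dims)
    group_by_clause = ("GROUP BY " if dims else "") + dim_string

    print(group_by_clause)  # GROUP BY hourOfDay, workloadType
    print(ignored)          # ['bogusDim']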
  • Dynamic registration of Python handler functions as MCP tools. Functions named 'handle_<tool_name>' are automatically discovered from loaded modules, wrapped, and registered with FastMCP using the tool name 'dba_resusageSummary'.
    all_functions = module_loader.get_all_functions()
    for name, func in all_functions.items():
        if not (inspect.isfunction(func) and name.startswith("handle_")):
            continue
        tool_name = name[len("handle_"):]
        if not any(re.match(p, tool_name) for p in config.get('tool', [])):
            continue
        wrapped = make_tool_wrapper(func)
        mcp.tool(name=tool_name, description=wrapped.__doc__)(wrapped)
        logger.info(f"Created tool: {tool_name}")
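    The same discovery convention can be demonstrated with nothing but inspect and re; in this self-contained sketch, a dummy dictionary and pattern list stand in for the real module_loader and config:

    import inspect
    import re

    # Dummy handlers standing in for the real module registry; illustration only.
    def handle_dba_resusageSummary(conn, dimensions=None):
        """Summarize resource usage."""

    def handle_base_readQuery(conn, sql=None):
        """Run a read-only query."""

    all_functions = {
        "handle_dba_resusageSummary": handle_dba_resusageSummary,
        "handle_base_readQuery": handle_base_readQuery,
    }
    tool_patterns = ["dba_.*"]  # stands in for config.get('tool', [])

    for name, func in all_functions.items():
        if not (inspect.isfunction(func) and name.startswith("handle_")):
            continue
        tool_name = name[len("handle_"):]
        if any(re.match(p, tool_name) for p in tool_patterns):
            print(f"Would register tool: {tool_name}")
    # Prints only: Would register tool: dba_resusageSummary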
  • Generates the input schema for MCP tools by inspecting the handler function's signature, removing internal params like 'conn', preserving type annotations and docstring for Pydantic-based validation and client forms.
    def make_tool_wrapper(func):
        """Create an MCP-facing wrapper for a handle_* function.

        - Removes internal parameters (conn, tool_name, fs_config) from the MCP
          signature while still injecting them into the underlying handler.
        - Preserves the handler's parameter names and types so MCP clients can
          render friendly forms.
        """
        sig = inspect.signature(func)
        inject_kwargs = {}
        removable = {"conn", "tool_name"}
        if "fs_config" in sig.parameters:
            inject_kwargs["fs_config"] = fs_config
            removable.add("fs_config")
        params = [
            p for name, p in sig.parameters.items()
            if name not in removable
            and p.kind in (inspect.Parameter.POSITIONAL_OR_KEYWORD, inspect.Parameter.KEYWORD_ONLY)
        ]
        new_sig = sig.replace(parameters=params)
        # Preserve annotations for Pydantic schema generation
        annotations = {}
        for name, p in sig.parameters.items():
            if name in removable:
                continue
            if p.kind not in (inspect.Parameter.POSITIONAL_OR_KEYWORD, inspect.Parameter.KEYWORD_ONLY):
                continue
            if p.annotation is not inspect._empty:
                annotations[name] = p.annotation

        def _exec(*args, **kwargs):
            return execute_db_tool(func, **inject_kwargs, **kwargs)

        _exec.__name__ = getattr(func, "__name__", "wrapped_tool")
        _exec.__signature__ = new_sig
        _exec.__doc__ = func.__doc__
        if annotations:
            _exec.__annotations__ = annotations
        return _exec
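    One way to see the signature rewrite in action is to wrap a toy handler. This sketch assumes make_tool_wrapper is defined in the same module and replaces execute_db_tool with a pass-through stub so it runs standalone:

    import inspect

    def execute_db_tool(func, **kwargs):
        # Pass-through stub for this example; the real server injects a live
        # Teradata connection and error handling here.
        return func(conn=None, **kwargs)

    def handle_demo(conn, user_name: str | None = None):
        """Toy handler used only to show the signature rewrite."""
        return f"queried as {user_name}"

    wrapped = make_tool_wrapper(handle_demo)
    print(inspect.signature(wrapped))  # (user_name: str | None = None), conn is hidden
    print(wrapped(user_name="alice"))  # queried as alice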

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/blitzstermayank/MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.