Skip to main content
Glama
Teradata

Teradata MCP Server

Official
by Teradata

base_tableUsage

Analyze table and view usage in Teradata databases to identify active database objects and their value through SQL execution tracking.

Instructions

Measure the usage of a table and views by users in a given schema, this is helpful to infer what database objects are most actively used or drive most value via SQLAlchemy, bind parameters if provided (prepared SQL), and return the fully rendered SQL (with literals) in metadata.

Arguments: database_name - Database name

Returns: ResponseType: formatted response with query results + metadata

Input Schema

TableJSON Schema
NameRequiredDescriptionDefault
database_nameNo

Implementation Reference

  • Handler function implementing the logic for the 'base_tableUsage' tool. It executes a SQL query against Teradata's DBQL tables to compute table usage metrics including query frequency, percentage of total queries, usage category (High/Medium/Low), and recency of queries for tables in the specified database.
    # Read table usage tool
    def handle_base_tableUsage(conn: TeradataConnection, database_name: str | None = None, *args, **kwargs):
        """
        Measure the usage of a table and views by users in a given schema, this is helpful to infer what database objects are most actively used or drive most value via SQLAlchemy, bind parameters if provided (prepared SQL), and return the fully rendered SQL (with literals) in metadata.
    
        Arguments:
          database_name - Database name
    
        Returns:
          ResponseType: formatted response with query results + metadata
        """
    
        logger.debug("Tool: handle_base_tableUsage: Args: database_name:")
        database_name_filter = f"AND objectdatabasename = '{database_name}'" if database_name else ""
    
        table_usage_sql="""
        LOCKING ROW for ACCESS
        sel
        DatabaseName
        ,TableName
        ,Weight as "QueryCount"
        ,100*"Weight" / sum("Weight") over(partition by 1) PercentTotal
        ,case
            when PercentTotal >=10 then 'High'
            when PercentTotal >=5 then 'Medium'
            else 'Low'
        end (char(6)) usage_freq
        ,FirstQueryDaysAgo
        ,LastQueryDaysAgo
    
        from
        (
            SELECT   TRIM(QTU1.TableName)  AS "TableName"
                    , TRIM(QTU1.DatabaseName)  AS "DatabaseName"
                    ,max((current_timestamp - CollectTimeStamp) day(4)) as "FirstQueryDaysAgo"
                    ,min((current_timestamp - CollectTimeStamp) day(4)) as "LastQueryDaysAgo"
                    , COUNT(DISTINCT QTU1.QueryID) as "Weight"
            FROM    (
                                SELECT   objectdatabasename AS DatabaseName
                                    , ObjectTableName AS TableName
                                    , QueryId
                                FROM DBC.DBQLObjTbl /* uncomment for DBC */
                                WHERE Objecttype in ('Tab', 'Viw')
                                {database_name_filter}
                                AND ObjectTableName IS NOT NULL
                                AND ObjectColumnName IS NULL
                                -- AND LogDate BETWEEN '2017-01-01' AND '2017-08-01' /* uncomment for PDCR */
                                --	AND LogDate BETWEEN current_date - 90 AND current_date - 1 /* uncomment for PDCR */
                                GROUP BY 1,2,3
                            ) AS QTU1
            INNER JOIN DBC.DBQLogTbl QU /* uncomment for DBC */
            ON QTU1.QueryID=QU.QueryID
            AND (QU.AMPCPUTime + QU.ParserCPUTime) > 0
    
            GROUP BY 1,2
        ) a
        order by PercentTotal desc
        qualify PercentTotal>0
        ;
    
        """
    
    
        with conn.cursor() as cur:
            rows = cur.execute(table_usage_sql.format(database_name_filter=database_name_filter))
            data = rows_to_json(cur.description, rows.fetchall())
        if len(data):
            info=f'This data contains the list of tables most frequently queried objects in database schema {database_name}'
        else:
            info=f'No tables have recently been queried in the database schema {database_name}.'
        metadata = {
            "tool_name": "handle_base_tableUsage",
            "database": database_name,
            "table_count": len(data),
            "comment": info
        }
        logger.debug(f"Tool: handle_base_tableUsage: metadata: {metadata}")
        return create_response(data, metadata)
Behavior2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions SQLAlchemy usage, bind parameters, and returning rendered SQL in metadata, which adds some behavioral context. However, it lacks critical details: whether this is a read-only operation, performance impact, authentication needs, rate limits, or what the 'formatted response' actually contains. For a tool that likely queries system tables/views, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is reasonably concise (two sentences plus arguments/returns). However, it's not optimally structured: the first sentence is overloaded with multiple concepts (measuring usage, inferring value, SQLAlchemy, bind parameters, metadata). Arguments and returns are listed but lack elaboration. Some redundancy exists (e.g., 'bind parameters if provided' could be integrated more smoothly).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, no output schema, and moderate complexity (usage analytics), the description is incomplete. It omits key contextual details: output format beyond 'formatted response', error conditions, whether it aggregates across all schemas if database_name is null, how 'usage' is defined (queries, time, users), or dependencies. Siblings like dba_tableUsageImpact suggest richer alternatives, but no comparison is made.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It only mentions 'database_name' generically in the arguments list without explaining its role (e.g., optional default schema? all schemas if null?). No details on what 'schema' means in context or how usage is measured (time range, metrics). The description fails to add meaningful semantics beyond the bare parameter name.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Measure the usage of a table and views by users in a given schema' with the specific goal of inferring active/value-driving objects. It distinguishes from siblings like base_tableList (lists tables) or base_tablePreview (shows data) by focusing on usage metrics. However, it doesn't explicitly differentiate from dba_tableUsageImpact, which might be a closer sibling.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides minimal usage guidance. It mentions the tool is 'helpful to infer what database objects are most actively used or drive most value', but gives no explicit when-to-use vs. alternatives (e.g., vs. dba_tableUsageImpact or dba_featureUsage). No prerequisites, exclusions, or comparative context are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

Install Server

Other Tools

Latest Blog Posts

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Teradata/teradata-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server