sql_Retrieve_Cluster_Queries

Extract SQL queries and performance metrics from Teradata clusters to analyze query patterns, identify optimization opportunities, and correlate performance issues with specific SQL structures.

Instructions

RETRIEVE ACTUAL SQL QUERIES FROM SPECIFIC CLUSTERS FOR PATTERN ANALYSIS

This tool extracts the actual SQL query text and performance metrics from selected clusters, enabling detailed pattern analysis and specific optimization recommendations. Essential for moving from cluster-level analysis to actual query optimization.

DETAILED ANALYSIS CAPABILITIES:

  • SQL Pattern Recognition: Analyze actual query structures, joins, predicates, and functions

  • Performance Correlation: Connect query patterns to specific performance characteristics

  • Optimization Identification: Identify common anti-patterns, missing indexes, inefficient joins

  • Code Quality Assessment: Evaluate query construction, complexity, and best practices

  • Workload Understanding: See actual business logic and data access patterns

QUERY SELECTION STRATEGIES (example calls follow this list):

  • By CPU Impact: Sort by 'ampcputime' to focus on highest CPU consumers

  • By I/O Volume: Sort by 'logicalio' to find scan-intensive queries

  • By Skew Problems: Sort by 'cpuskw' or 'ioskw' for distribution issues

  • By Complexity: Sort by 'numsteps' for complex execution plans

  • By Response Time: Sort by 'response_secs' for user experience impact
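For illustration, here is a minimal sketch of how these strategies translate into calls to the handler shown under Implementation Reference below; the connection object and cluster IDs are assumed to come from earlier analysis steps, and the limit of 50 is arbitrary:

    # Assumes conn is an open Teradata connection and clusters 3 and 7
    # were flagged earlier by sql_Analyze_Cluster_Stats (hypothetical IDs).
    top_cpu = handle_sql_Retrieve_Cluster_Queries(
        conn, cluster_ids=[3, 7], metric="ampcputime", limit_per_cluster=50
    )
    # Same clusters, but surface scan-intensive queries instead
    top_io = handle_sql_Retrieve_Cluster_Queries(
        conn, cluster_ids=[3, 7], metric="logicalio", limit_per_cluster=50
    )
    # Distribution problems: rank by CPU skew ratio
    top_skewed = handle_sql_Retrieve_Cluster_Queries(
        conn, cluster_ids=[3, 7], metric="cpuskw", limit_per_cluster=50
    )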

AVAILABLE METRICS FOR SORTING (see the validation note after this list):

  • ampcputime: Total CPU seconds (primary optimization target)

  • logicalio: Total logical I/O operations (scan indicator)

  • cpuskw: CPU skew ratio (distribution problems)

  • ioskw: I/O skew ratio (hot spot indicators)

  • pji: Physical-to-Logical I/O ratio (compute intensity)

  • uii: Unit I/O Intensity (I/O efficiency)

  • numsteps: Query execution plan steps (complexity)

  • response_secs: Wall-clock execution time (user impact)

  • delaytime: Time spent in queue (concurrency issues)
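Note that the handler validates the requested metric against exactly this list and silently falls back to 'ampcputime' when the value is not recognized (condensed from the implementation below):

    valid_metrics = [
        'ampcputime', 'logicalio', 'cpuskw', 'ioskw', 'pji',
        'uii', 'numsteps', 'response_secs', 'delaytime'
    ]
    if metric not in valid_metrics:
        metric = 'ampcputime'  # default fallback; no error is raised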

AUTOMATIC PERFORMANCE CATEGORIZATION: Each query is categorized using configurable thresholds (from sql_opt_config.yml):

  • CPU Categories: VERY_HIGH_CPU (>config.very_high), HIGH_CPU (>config.high), MEDIUM_CPU (>10s), LOW_CPU

  • CPU Skew: SEVERE_CPU_SKEW (>config.severe), HIGH_CPU_SKEW (>config.high), MODERATE_CPU_SKEW (>config.moderate), NORMAL

  • I/O Skew: SEVERE_IO_SKEW (>config.severe), HIGH_IO_SKEW (>config.high), MODERATE_IO_SKEW (>config.moderate), NORMAL

Threshold values are read from the config file (sql_opt_config.yml): 'high' and 'very_high' for CPU, and 'moderate', 'high', and 'severe' for skew, as sketched below.
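As a minimal sketch, the categorization reduces to the following ladders. The threshold values shown are the handler's fallback defaults when sql_opt_config.yml omits them, and the actual work happens in a SQL CASE expression rather than in Python:

    def categorize_cpu(ampcputime: float,
                       high: float = 100.0,        # performance_thresholds.cpu.high
                       very_high: float = 1000.0   # performance_thresholds.cpu.very_high
                       ) -> str:
        # Mirrors the CPU CASE expression in the retrieval SQL
        if ampcputime > very_high:
            return 'VERY_HIGH_CPU'
        if ampcputime > high:
            return 'HIGH_CPU'
        if ampcputime > 10:  # fixed boundary; not read from config
            return 'MEDIUM_CPU'
        return 'LOW_CPU'

    def categorize_skew(skew: float, kind: str = 'CPU',
                        moderate: float = 2.0,  # performance_thresholds.skew.moderate
                        high: float = 3.0,      # performance_thresholds.skew.high
                        severe: float = 5.0     # performance_thresholds.skew.severe
                        ) -> str:
        # The same ladder is applied to both cpuskw and ioskw
        if skew > severe:
            return f'SEVERE_{kind}_SKEW'
        if skew > high:
            return f'HIGH_{kind}_SKEW'
        if skew > moderate:
            return f'MODERATE_{kind}_SKEW'
        return f'NORMAL_{kind}_SKEW'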

TYPICAL OPTIMIZATION WORKFLOW:

  1. Start with clusters identified from sql_Analyze_Cluster_Stats

  2. Retrieve top queries by impact metric (usually 'ampcputime')

  3. Analyze SQL patterns for common issues (a first-pass scan is sketched after this list):

    • Missing WHERE clauses or inefficient predicates

    • Cartesian products or missing JOIN conditions

    • Inefficient GROUP BY or ORDER BY operations

    • Suboptimal table access patterns

    • Missing or outdated statistics

  4. Develop specific optimization recommendations
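To make step 3 concrete, here is a deliberately crude, hypothetical first-pass scan over the retrieved query text. Real analysis would consult EXPLAIN plans and statistics; simple string heuristics like these only shortlist candidates for closer review:

    import json
    import re

    def flag_suspect_patterns(response_json: str) -> list:
        """Hypothetical helper: shortlist retrieved queries showing common red flags."""
        rows = json.loads(response_json)["results"]
        findings = []
        for row in rows:
            # Normalize whitespace and case so substring checks behave
            sql = re.sub(r"\s+", " ", (row.get("txt") or "").upper())
            flags = []
            if " WHERE " not in sql:
                flags.append("no WHERE clause (possible full-table scan)")
            if re.search(r"\bSELECT\s+\*", sql):
                flags.append("SELECT * (wide projection)")
            if sql.count(" JOIN ") > sql.count(" ON "):
                flags.append("JOIN without matching ON (possible product join)")
            if flags:
                findings.append({"id": row.get("id"), "flags": flags})
        return findings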

QUERY LIMIT STRATEGY:

  • Use the query limit set in the config file for pattern recognition and analysis, unless the user specifies a different limit (see the config sketch below)
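A sketch of the relevant slice of sql_opt_config.yml, expressed as the Python dict the handler reads. The databases, tables, and performance_thresholds keys are taken from the handler code; the query-limit key and all concrete values are hypothetical, since only the limit_per_cluster=250 default is visible in the code:

    SQL_CLUSTERING_CONFIG = {
        'databases': {'feature_db': 'demo_feature_db'},          # value illustrative
        'tables': {'sql_query_clusters': 'sql_query_clusters'},  # value illustrative
        'performance_thresholds': {
            'cpu':  {'high': 100, 'very_high': 1000},            # handler fallback defaults
            'skew': {'moderate': 2.0, 'high': 3.0, 'severe': 5.0},
        },
        # Hypothetical key for the default retrieval limit
        'query_limit': 250,
    }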

OUTPUT INCLUDES (an abbreviated example follows this list):

  • Complete SQL query text for each query

  • All performance metrics, plus user, application, and workload context, cluster membership, and per-cluster and overall rankings

  • Performance categories for quick filtering
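Abbreviated shape of the returned JSON; field names come from the retrieval SQL and create_response below, and all values are placeholders:

    {
      "status": "success",
      "metadata": {
        "tool_name": "sql_Retrieve_Cluster_Queries",
        "retrieval_parameters": {"cluster_ids": [...], "sort_metric": "...", "limit_per_cluster": ...},
        "cluster_summary": [...],
        "queries_retrieved": ...,
        "table_source": "<feature_db>.<clusters_table>",
        "analysis_ready": true,
        "description": "..."
      },
      "results": [
        {
          "td_clusterid_kmeans": ..., "id": ..., "txt": "<full SQL text>",
          "username": "...", "appid": "...", "wdname": "...",
          "ampcputime": ..., "logicalio": ..., "cpuskw": ..., "ioskw": ...,
          "pji": ..., "uii": ..., "numsteps": ..., "response_secs": ...,
          "response_mins": ..., "delaytime": ..., "silhouette_score": ...,
          "rank_in_cluster": ..., "overall_rank": ...,
          "cpu_category": "...", "cpu_skew_category": "...", "io_skew_category": "..."
        }
      ]
    }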

Input Schema

Name                Required   Default      Description
cluster_ids         Yes        —            Cluster IDs whose queries should be retrieved
metric              No         ampcputime   Metric used to rank queries within each cluster
limit_per_cluster   No         250          Maximum number of queries returned per cluster
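For example, a request body under this schema (cluster IDs and limit are illustrative):

    {
      "cluster_ids": [3, 7],
      "metric": "logicalio",
      "limit_per_cluster": 25
    }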

Implementation Reference

  • The main handler function for the 'sql_Retrieve_Cluster_Queries' tool. It takes database connection, cluster IDs, sorting metric, and limit; executes SQL to retrieve top queries from clusters with performance metrics and categorizations; formats results as JSON with metadata.
    def handle_sql_Retrieve_Cluster_Queries(
        conn,
        cluster_ids: List[int],
        metric: str = "ampcputime",
        limit_per_cluster: int = 250,
        *args,
        **kwargs,
    ):
        """Retrieve actual SQL queries from specific clusters for pattern analysis.

        See the Instructions section above for the full tool description.
        """
        config = SQL_CLUSTERING_CONFIG
        logger.debug(
            f"handle_sql_Retrieve_Cluster_Queries: clusters={cluster_ids}, "
            f"metric={metric}, limit={limit_per_cluster}"
        )
        feature_db = config['databases']['feature_db']
        clusters_table = config['tables']['sql_query_clusters']

        # Validate metric
        valid_metrics = [
            'ampcputime', 'logicalio', 'cpuskw', 'ioskw', 'pji',
            'uii', 'numsteps', 'response_secs', 'delaytime'
        ]
        if metric not in valid_metrics:
            metric = 'ampcputime'  # Default fallback

        # Convert cluster_ids list to comma-separated string for SQL IN clause
        cluster_ids_str = ','.join(map(str, cluster_ids))

        with conn.cursor() as cur:
            # Get thresholds from config
            thresholds = config.get('performance_thresholds', {})
            cpu_high = thresholds.get('cpu', {}).get('high', 100)
            cpu_very_high = thresholds.get('cpu', {}).get('very_high', 1000)
            skew_moderate = thresholds.get('skew', {}).get('moderate', 2.0)
            skew_high = thresholds.get('skew', {}).get('high', 3.0)
            skew_severe = thresholds.get('skew', {}).get('severe', 5.0)

            retrieve_queries_sql = f"""
                SELECT
                    td_clusterid_kmeans, id, txt, username, appid, numsteps,
                    ampcputime, logicalio, wdname, cpuskw, ioskw, pji, uii,
                    response_secs, response_mins, delaytime, silhouette_score,
                    -- Ranking within cluster by selected metric
                    ROW_NUMBER() OVER (PARTITION BY td_clusterid_kmeans
                                       ORDER BY {metric} DESC) AS rank_in_cluster,
                    -- Overall ranking across all selected clusters
                    ROW_NUMBER() OVER (ORDER BY {metric} DESC) AS overall_rank,
                    -- Performance categorization with configurable thresholds
                    CASE
                        WHEN ampcputime > {cpu_very_high} THEN 'VERY_HIGH_CPU'
                        WHEN ampcputime > {cpu_high} THEN 'HIGH_CPU'
                        WHEN ampcputime > 10 THEN 'MEDIUM_CPU'
                        ELSE 'LOW_CPU'
                    END AS cpu_category,
                    CASE
                        WHEN cpuskw > {skew_severe} THEN 'SEVERE_CPU_SKEW'
                        WHEN cpuskw > {skew_high} THEN 'HIGH_CPU_SKEW'
                        WHEN cpuskw > {skew_moderate} THEN 'MODERATE_CPU_SKEW'
                        ELSE 'NORMAL_CPU_SKEW'
                    END AS cpu_skew_category,
                    CASE
                        WHEN ioskw > {skew_severe} THEN 'SEVERE_IO_SKEW'
                        WHEN ioskw > {skew_high} THEN 'HIGH_IO_SKEW'
                        WHEN ioskw > {skew_moderate} THEN 'MODERATE_IO_SKEW'
                        ELSE 'NORMAL_IO_SKEW'
                    END AS io_skew_category
                FROM {feature_db}.{clusters_table}
                WHERE td_clusterid_kmeans IN ({cluster_ids_str})
                QUALIFY ROW_NUMBER() OVER (PARTITION BY td_clusterid_kmeans
                                           ORDER BY {metric} DESC) <= {limit_per_cluster}
                ORDER BY td_clusterid_kmeans, {metric} DESC
            """
            cur.execute(retrieve_queries_sql)
            data = rows_to_json(cur.description, cur.fetchall())

            # Get summary by cluster
            cur.execute(f"""
                SELECT
                    td_clusterid_kmeans,
                    COUNT(*) AS queries_retrieved,
                    AVG({metric}) AS avg_metric_value,
                    MAX({metric}) AS max_metric_value,
                    MIN({metric}) AS min_metric_value
                FROM {feature_db}.{clusters_table}
                WHERE td_clusterid_kmeans IN ({cluster_ids_str})
                GROUP BY td_clusterid_kmeans
                ORDER BY td_clusterid_kmeans
            """)
            cluster_summary = rows_to_json(cur.description, cur.fetchall())

        logger.debug(f"Retrieved {len(data)} queries from {len(cluster_ids)} clusters")

        # Return results with metadata
        metadata = {
            "tool_name": "sql_Retrieve_Cluster_Queries",
            "retrieval_parameters": {
                "cluster_ids": cluster_ids,
                "sort_metric": metric,
                "limit_per_cluster": limit_per_cluster,
                "valid_metrics": valid_metrics
            },
            "cluster_summary": cluster_summary,
            "queries_retrieved": len(data),
            "table_source": f"{feature_db}.{clusters_table}",
            "analysis_ready": True,
            "description": f"Retrieved top {limit_per_cluster} queries per cluster "
                           f"sorted by {metric} - ready for pattern analysis and "
                           f"optimization recommendations"
        }
        return create_response(data, metadata)
  • Package __init__.py that imports sql_opt_tools, making the handler available for auto-discovery by the module_loader which scans functions in loaded modules for tool registration.
    from .sql_opt_resources import *
    from .sql_opt_tools import *
  • Utility function used by the handler to format the tool response as standardized JSON with optional metadata.
    def create_response(data: Any, metadata: dict[str, Any] | None = None) -> str:
        """Create a standardized JSON response structure"""
        if metadata:
            response = {
                "status": "success",
                "metadata": metadata,
                "results": data
            }
        else:
            response = {
                "status": "success",
                "results": data
            }
        return json.dumps(response, default=serialize_teradata_types)
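For example, a call with metadata wraps the rows like this (a minimal sketch; values illustrative):

    payload = create_response(
        [{"id": 1, "ampcputime": 42.0}],               # rows
        {"tool_name": "sql_Retrieve_Cluster_Queries"}  # metadata
    )
    # payload == '{"status": "success", "metadata": {...}, "results": [...]}'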
