# sql_Retrieve_Cluster_Queries
Extract SQL queries and performance metrics from Teradata clusters to identify optimization patterns, analyze query structures, and detect performance issues for database tuning.
## Instructions

**RETRIEVE ACTUAL SQL QUERIES FROM SPECIFIC CLUSTERS FOR PATTERN ANALYSIS**
This tool extracts the actual SQL query text and performance metrics from selected clusters, enabling detailed pattern analysis and specific optimization recommendations. Essential for moving from cluster-level analysis to actual query optimization.
**DETAILED ANALYSIS CAPABILITIES:**
- **SQL Pattern Recognition**: Analyze actual query structures, joins, predicates, and functions
- **Performance Correlation**: Connect query patterns to specific performance characteristics
- **Optimization Identification**: Identify common anti-patterns, missing indexes, and inefficient joins
- **Code Quality Assessment**: Evaluate query construction, complexity, and best practices
- **Workload Understanding**: See actual business logic and data access patterns
**QUERY SELECTION STRATEGIES** (illustrated in the sketch after this list):
- **By CPU Impact**: Sort by 'ampcputime' to focus on the highest CPU consumers
- **By I/O Volume**: Sort by 'logicalio' to find scan-intensive queries
- **By Skew Problems**: Sort by 'cpuskw' or 'ioskw' for distribution issues
- **By Complexity**: Sort by 'numsteps' for complex execution plans
- **By Response Time**: Sort by 'response_secs' for user experience impact
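
A minimal sketch of these strategies in action, assuming the handler is importable and the connection credentials are filled in:

```python
import teradatasql

# Assumes handle_sql_Retrieve_Cluster_Queries is importable from the
# server's sql_opt tool module; cluster IDs 3 and 7 are illustrative.
with teradatasql.connect(host="...", user="...", password="...") as conn:
    # Highest CPU consumers first (the usual starting point)
    by_cpu = handle_sql_Retrieve_Cluster_Queries(
        conn, cluster_ids=[3, 7], metric="ampcputime", limit_per_cluster=50
    )
    # Same clusters, re-sorted to surface distribution problems
    by_skew = handle_sql_Retrieve_Cluster_Queries(
        conn, cluster_ids=[3, 7], metric="cpuskw", limit_per_cluster=50
    )
```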
**AVAILABLE METRICS FOR SORTING** (the whitelist check is mirrored after this list):
- **ampcputime**: Total CPU seconds (primary optimization target)
- **logicalio**: Total logical I/O operations (scan indicator)
- **cpuskw**: CPU skew ratio (distribution problems)
- **ioskw**: I/O skew ratio (hot-spot indicator)
- **pji**: Physical-to-logical I/O ratio (compute intensity)
- **uii**: Unit I/O intensity (I/O efficiency)
- **numsteps**: Query execution plan steps (complexity)
- **response_secs**: Wall-clock execution time (user impact)
- **delaytime**: Time spent in queue (concurrency issues)
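
The handler whitelists the sort metric and silently falls back to 'ampcputime' for anything unrecognized (see the implementation below). A small standalone mirror of that check:

```python
def resolve_sort_metric(metric: str) -> str:
    """Mirror of the handler's whitelist check: an unknown metric name
    silently falls back to 'ampcputime' rather than raising an error."""
    valid_metrics = {
        'ampcputime', 'logicalio', 'cpuskw', 'ioskw', 'pji',
        'uii', 'numsteps', 'response_secs', 'delaytime',
    }
    return metric if metric in valid_metrics else 'ampcputime'

assert resolve_sort_metric('logicalio') == 'logicalio'
assert resolve_sort_metric('cputime') == 'ampcputime'  # typo -> fallback
```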
**AUTOMATIC PERFORMANCE CATEGORIZATION:** Each query is categorized using configurable thresholds (from sql_opt_config.yml):
- **CPU Categories**: VERY_HIGH_CPU (>config.very_high), HIGH_CPU (>config.high), MEDIUM_CPU (>10s), LOW_CPU
- **CPU Skew**: SEVERE_CPU_SKEW (>config.severe), HIGH_CPU_SKEW (>config.high), MODERATE_CPU_SKEW (>config.moderate), NORMAL
- **I/O Skew**: SEVERE_IO_SKEW (>config.severe), HIGH_IO_SKEW (>config.high), MODERATE_IO_SKEW (>config.moderate), NORMAL

Use the threshold values defined in the config file: `high` and `very_high` for CPU, and `moderate`, `high`, and `severe` for skew. The structure the handler expects is sketched below.
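A sketch of that threshold structure, expressed as the equivalent Python dict the handler reads; the numeric values shown are the hard-coded fallbacks from the implementation below, and the actual sql_opt_config.yml may override any of them:

```python
# Threshold structure read via config.get('performance_thresholds', ...).
# Values are the implementation's fallback defaults, not mandated settings.
performance_thresholds = {
    "cpu": {
        "high": 100,        # AMP CPU seconds -> HIGH_CPU
        "very_high": 1000,  # AMP CPU seconds -> VERY_HIGH_CPU
    },
    "skew": {
        "moderate": 2.0,    # skew ratio -> MODERATE_*_SKEW
        "high": 3.0,        # skew ratio -> HIGH_*_SKEW
        "severe": 5.0,      # skew ratio -> SEVERE_*_SKEW
    },
}
```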
**TYPICAL OPTIMIZATION WORKFLOW:**
1. Start with clusters identified by sql_Analyze_Cluster_Stats
2. Retrieve the top queries by impact metric (usually 'ampcputime')
3. Analyze SQL patterns for common issues:
   - Missing WHERE clauses or inefficient predicates
   - Cartesian products or missing JOIN conditions
   - Inefficient GROUP BY or ORDER BY operations
   - Suboptimal table access patterns
   - Missing or outdated statistics
4. Develop specific optimization recommendations (the whole flow is sketched after this list)
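
A hedged end-to-end sketch of this workflow; the `handle_sql_Analyze_Cluster_Stats` call signature and both response shapes are assumptions, not documented APIs:

```python
# Step 1: hypothetical call and response shape for the cluster-stats tool
stats = handle_sql_Analyze_Cluster_Stats(conn)
worst_clusters = [s["td_clusterid_kmeans"] for s in stats["data"][:5]]  # assumed shape

# Step 2: pull the actual query text for those clusters
queries = handle_sql_Retrieve_Cluster_Queries(
    conn, cluster_ids=worst_clusters, metric="ampcputime"
)

# Steps 3-4: scan each query's SQL text (the 'txt' column) for the
# anti-patterns listed above and turn findings into recommendations.
```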
**QUERY LIMIT STRATEGY:**
- Use the query limit set in the config file for pattern recognition and analysis unless the user specifies a different limit (see the sketch below).
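
A sketch of how a caller might honor that precedence; the `query_limits.per_cluster` config key is hypothetical, while 250 is the handler's own built-in default:

```python
from typing import Optional

def effective_limit(config: dict, user_limit: Optional[int] = None) -> int:
    """A user-supplied limit wins; otherwise fall back to the configured
    value ('query_limits.per_cluster' is a hypothetical key name), then
    to the handler's built-in default of 250."""
    if user_limit is not None:
        return user_limit
    return config.get("query_limits", {}).get("per_cluster", 250)
```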
**OUTPUT INCLUDES** (shape illustrated below):
- Complete SQL query text for each query
- All performance metrics, plus user, application, and workload context
- Cluster membership with per-cluster and overall rankings
- Performance categories for quick filtering
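
A hedged illustration of the response shape, reconstructed from the metadata dict in the implementation below; the exact envelope built by `create_response()` is an assumption, and all values are illustrative:

```python
example_response = {
    "data": [  # one entry per retrieved query
        {
            "td_clusterid_kmeans": 3,
            "id": 1234567,
            "txt": "SELECT ...",          # full SQL text of the query
            "username": "ETL_USER",
            "ampcputime": 1842.5,
            "rank_in_cluster": 1,          # rank within its cluster
            "overall_rank": 1,             # rank across all selected clusters
            "cpu_category": "VERY_HIGH_CPU",
            "cpu_skew_category": "MODERATE_CPU_SKEW",
            "io_skew_category": "NORMAL_IO_SKEW",
            # plus logicalio, cpuskw, ioskw, pji, uii, numsteps,
            # response_secs, response_mins, delaytime, wdname, appid,
            # silhouette_score
        },
    ],
    "metadata": {
        "tool_name": "sql_Retrieve_Cluster_Queries",
        "queries_retrieved": 412,
        "analysis_ready": True,
    },
}
```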
## Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| cluster_ids | Yes | List of cluster IDs (`td_clusterid_kmeans` values) to retrieve queries from | |
| limit_per_cluster | No | Maximum number of queries to return per cluster | 250 |
| metric | No | Performance metric used to sort and rank queries (see list above) | ampcputime |
## Implementation Reference
- Main handler function executing the tool: connects to Teradata, builds dynamic SQL to retrieve the top queries from the specified clusters sorted by a performance metric, adds rankings and categories, and formats the result as JSON with metadata. (The full docstring, which mirrors the Instructions section above, is abbreviated here.)

```python
def handle_sql_Retrieve_Cluster_Queries(
    conn,
    cluster_ids: List[int],
    metric: str = "ampcputime",
    limit_per_cluster: int = 250,
    *args,
    **kwargs,
):
    """Retrieve actual SQL queries from specific clusters for pattern
    analysis. (Full docstring mirrors the Instructions section above.)"""
    # SQL_CLUSTERING_CONFIG, logger, rows_to_json, and create_response
    # are module-level names defined elsewhere in the tool module.
    config = SQL_CLUSTERING_CONFIG
    logger.debug(
        f"handle_sql_Retrieve_Cluster_Queries: clusters={cluster_ids}, "
        f"metric={metric}, limit={limit_per_cluster}"
    )
    feature_db = config['databases']['feature_db']
    clusters_table = config['tables']['sql_query_clusters']

    # Validate metric
    valid_metrics = [
        'ampcputime', 'logicalio', 'cpuskw', 'ioskw', 'pji',
        'uii', 'numsteps', 'response_secs', 'delaytime'
    ]
    if metric not in valid_metrics:
        metric = 'ampcputime'  # Default fallback

    # Convert cluster_ids list to comma-separated string for SQL IN clause
    cluster_ids_str = ','.join(map(str, cluster_ids))

    with conn.cursor() as cur:
        # Get thresholds from config
        thresholds = config.get('performance_thresholds', {})
        cpu_high = thresholds.get('cpu', {}).get('high', 100)
        cpu_very_high = thresholds.get('cpu', {}).get('very_high', 1000)
        skew_moderate = thresholds.get('skew', {}).get('moderate', 2.0)
        skew_high = thresholds.get('skew', {}).get('high', 3.0)
        skew_severe = thresholds.get('skew', {}).get('severe', 5.0)

        retrieve_queries_sql = f"""
        SELECT
            td_clusterid_kmeans, id, txt, username, appid, numsteps,
            ampcputime, logicalio, wdname, cpuskw, ioskw, pji, uii,
            response_secs, response_mins, delaytime, silhouette_score,
            -- Ranking within cluster by selected metric
            ROW_NUMBER() OVER (
                PARTITION BY td_clusterid_kmeans ORDER BY {metric} DESC
            ) AS rank_in_cluster,
            -- Overall ranking across all selected clusters
            ROW_NUMBER() OVER (ORDER BY {metric} DESC) AS overall_rank,
            -- Performance categorization with configurable thresholds
            CASE
                WHEN ampcputime > {cpu_very_high} THEN 'VERY_HIGH_CPU'
                WHEN ampcputime > {cpu_high} THEN 'HIGH_CPU'
                WHEN ampcputime > 10 THEN 'MEDIUM_CPU'
                ELSE 'LOW_CPU'
            END AS cpu_category,
            CASE
                WHEN cpuskw > {skew_severe} THEN 'SEVERE_CPU_SKEW'
                WHEN cpuskw > {skew_high} THEN 'HIGH_CPU_SKEW'
                WHEN cpuskw > {skew_moderate} THEN 'MODERATE_CPU_SKEW'
                ELSE 'NORMAL_CPU_SKEW'
            END AS cpu_skew_category,
            CASE
                WHEN ioskw > {skew_severe} THEN 'SEVERE_IO_SKEW'
                WHEN ioskw > {skew_high} THEN 'HIGH_IO_SKEW'
                WHEN ioskw > {skew_moderate} THEN 'MODERATE_IO_SKEW'
                ELSE 'NORMAL_IO_SKEW'
            END AS io_skew_category
        FROM {feature_db}.{clusters_table}
        WHERE td_clusterid_kmeans IN ({cluster_ids_str})
        QUALIFY ROW_NUMBER() OVER (
            PARTITION BY td_clusterid_kmeans ORDER BY {metric} DESC
        ) <= {limit_per_cluster}
        ORDER BY td_clusterid_kmeans, {metric} DESC
        """
        cur.execute(retrieve_queries_sql)
        data = rows_to_json(cur.description, cur.fetchall())

        # Get summary by cluster
        cur.execute(f"""
            SELECT
                td_clusterid_kmeans,
                COUNT(*) AS queries_retrieved,
                AVG({metric}) AS avg_metric_value,
                MAX({metric}) AS max_metric_value,
                MIN({metric}) AS min_metric_value
            FROM {feature_db}.{clusters_table}
            WHERE td_clusterid_kmeans IN ({cluster_ids_str})
            GROUP BY td_clusterid_kmeans
            ORDER BY td_clusterid_kmeans
        """)
        cluster_summary = rows_to_json(cur.description, cur.fetchall())

    logger.debug(f"Retrieved {len(data)} queries from {len(cluster_ids)} clusters")

    # Return results with metadata
    metadata = {
        "tool_name": "sql_Retrieve_Cluster_Queries",
        "retrieval_parameters": {
            "cluster_ids": cluster_ids,
            "sort_metric": metric,
            "limit_per_cluster": limit_per_cluster,
            "valid_metrics": valid_metrics,
        },
        "cluster_summary": cluster_summary,
        "queries_retrieved": len(data),
        "table_source": f"{feature_db}.{clusters_table}",
        "analysis_ready": True,
        "description": (
            f"Retrieved top {limit_per_cluster} queries per cluster sorted by "
            f"{metric} - ready for pattern analysis and optimization recommendations"
        ),
    }
    return create_response(data, metadata)
```
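
Once the handler returns, the precomputed category and ranking columns make client-side filtering straightforward. A small sketch, assuming `data` is the list of row dicts illustrated under OUTPUT INCLUDES:

```python
from typing import List

def worst_offenders(data: List[dict], top_n: int = 10) -> List[dict]:
    """Filter the handler's rows down to high-CPU queries near the top of
    their cluster, using the precomputed category and ranking columns."""
    return sorted(
        (q for q in data
         if q["cpu_category"] in ("VERY_HIGH_CPU", "HIGH_CPU")
         and q["rank_in_cluster"] <= top_n),
        key=lambda q: q["overall_rank"],
    )
```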
- src/teradata_mcp_server/app.py:270-282 (registration): dynamic registration code that discovers `handle_*` functions from the loaded tool modules (including sql_opt_tools.py), derives each tool name by stripping the `handle_` prefix, wraps the handler with DB connection injection and QueryBand handling, and registers it as an MCP tool, using the function signature for the schema and the docstring for the description.

```python
module_loader = td.initialize_module_loader(config)
if module_loader:
    all_functions = module_loader.get_all_functions()
    for name, func in all_functions.items():
        if not (inspect.isfunction(func) and name.startswith("handle_")):
            continue
        tool_name = name[len("handle_"):]
        if not any(re.match(p, tool_name) for p in config.get('tool', [])):
            continue
        wrapped = make_tool_wrapper(func)
        mcp.tool(name=tool_name, description=wrapped.__doc__)(wrapped)
        logger.info(f"Created tool: {tool_name}")
else:
```
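
The `make_tool_wrapper` call above is where the `conn` parameter disappears from the tool's public schema. A minimal sketch of how such a wrapper might work, assuming a hypothetical `get_connection()` factory; this is a reconstruction, not the actual app.py implementation:

```python
import functools
import inspect

def make_tool_wrapper(handler):
    """Hypothetical reconstruction: inject a live DB connection at call
    time and hide `conn` from the schema derived from the signature."""
    @functools.wraps(handler)
    def wrapper(*args, **kwargs):
        with get_connection() as conn:  # hypothetical connection factory
            return handler(conn, *args, **kwargs)

    # Drop `conn` from the visible signature so MCP schema generation
    # only exposes the tool's real parameters (cluster_ids, metric, ...).
    sig = inspect.signature(handler)
    wrapper.__signature__ = sig.replace(
        parameters=[p for p in sig.parameters.values() if p.name != "conn"]
    )
    return wrapper
```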