identify_performance_bottlenecks

Analyze JMeter test results to detect and report performance bottlenecks, helping users pinpoint where to optimize. Provide the path to a JTL file to receive a formatted bottleneck analysis.

Instructions

Identify performance bottlenecks in JMeter test results.

Args:
    jtl_file: Path to the JTL file containing test results

Returns:
    str: Bottleneck analysis results in a formatted string
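
For orientation, here is how an MCP client might invoke this tool. A minimal sketch using the official MCP Python SDK, assuming the server is started as a local stdio subprocess via python server.py (the entry-point name and the JTL path are assumptions):

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Launch the JMeter MCP server as a stdio subprocess
        # ("server.py" is an assumed entry point; adjust as needed).
        params = StdioServerParameters(command="python", args=["server.py"])
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                result = await session.call_tool(
                    "identify_performance_bottlenecks",
                    {"jtl_file": "results/run1.jtl"},
                )
                # The tool returns its analysis as a single text content block.
                print(result.content[0].text)

    asyncio.run(main())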

Input Schema

Name      Required  Description                                    Default
jtl_file  Yes       Path to the JTL file containing test results   —
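
A JTL file in JMeter's default CSV format is plain text, so a tiny synthetic file is enough to exercise the tool. A sketch that writes two sample rows (the data values are fabricated for illustration; the header matches JMeter's default CSV column set):

    import textwrap
    from pathlib import Path

    # Synthetic sample data, for illustration only.
    SAMPLE_JTL = textwrap.dedent("""\
        timeStamp,elapsed,label,responseCode,responseMessage,threadName,dataType,success,failureMessage,bytes,sentBytes,grpThreads,allThreads,URL,Latency,IdleTime,Connect
        1700000000000,150,Home Page,200,OK,Thread Group 1-1,text,true,,1024,256,1,1,http://example.com/,120,0,30
        1700000000200,2300,Checkout,500,Internal Server Error,Thread Group 1-2,text,false,,512,256,2,2,http://example.com/checkout,2250,0,25
        """)

    out = Path("results/run1.jtl")
    out.parent.mkdir(parents=True, exist_ok=True)
    out.write_text(SAMPLE_JTL)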

Implementation Reference

  • The handler function decorated with @mcp.tool() that implements the core logic for identifying performance bottlenecks from a JMeter JTL file using TestResultsAnalyzer.
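    # Names this handler relies on (import paths are assumptions inferred from
    # the MCP Python SDK and the project layout; adjust to the actual modules):
    #   from pathlib import Path
    #   from mcp.server.fastmcp import FastMCP   # provides the `mcp` instance
    #   from analyzer import TestResultsAnalyzer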
    @mcp.tool()
    async def identify_performance_bottlenecks(jtl_file: str) -> str:
        """Identify performance bottlenecks in JMeter test results.
        
        Args:
            jtl_file: Path to the JTL file containing test results
            
        Returns:
            str: Bottleneck analysis results in a formatted string
        """
        try:
            analyzer = TestResultsAnalyzer()
            
            # Validate file exists
            file_path = Path(jtl_file)
            if not file_path.exists():
                return f"Error: JTL file not found: {jtl_file}"
            
            try:
                # Analyze the file with detailed analysis
                analysis_results = analyzer.analyze_file(file_path, detailed=True)
                
                # Format the results as a string
                result_str = f"Performance Bottleneck Analysis of {jtl_file}:\n\n"
                
                # Add bottleneck information
                detailed_info = analysis_results.get("detailed", {})
                bottlenecks = detailed_info.get("bottlenecks", {})
                
                if not bottlenecks:
                    return f"No bottlenecks identified in {jtl_file}."
                
                # Slow endpoints
                slow_endpoints = bottlenecks.get("slow_endpoints", [])
                if slow_endpoints:
                    result_str += "Slow Endpoints:\n"
                    for endpoint in slow_endpoints:
                        result_str += f"- {endpoint.get('endpoint')}: {endpoint.get('response_time'):.2f} ms "
                        result_str += f"(Severity: {endpoint.get('severity')})\n"
                    result_str += "\n"
                else:
                    result_str += "No slow endpoints identified.\n\n"
                
                # Error-prone endpoints
                error_endpoints = bottlenecks.get("error_prone_endpoints", [])
                if error_endpoints:
                    result_str += "Error-Prone Endpoints:\n"
                    for endpoint in error_endpoints:
                        result_str += f"- {endpoint.get('endpoint')}: {endpoint.get('error_rate'):.2f}% "
                        result_str += f"(Severity: {endpoint.get('severity')})\n"
                    result_str += "\n"
                else:
                    result_str += "No error-prone endpoints identified.\n\n"
                
                # Anomalies
                anomalies = bottlenecks.get("anomalies", [])
                if anomalies:
                    result_str += "Response Time Anomalies:\n"
                    for anomaly in anomalies:
                        result_str += f"- At {anomaly.get('timestamp')}: "
                        result_str += f"Expected {anomaly.get('expected_value'):.2f} ms, "
                        result_str += f"Got {anomaly.get('actual_value'):.2f} ms "
                        result_str += f"({anomaly.get('deviation_percentage'):.2f}% deviation)\n"
                    result_str += "\n"
                else:
                    result_str += "No response time anomalies detected.\n\n"
                
                # Concurrency impact
                concurrency = bottlenecks.get("concurrency_impact", {})
                if concurrency:
                    result_str += "Concurrency Impact:\n"
                    correlation = concurrency.get("correlation", 0)
                    result_str += f"- Correlation between threads and response time: {correlation:.2f}\n"
                    
                    if concurrency.get("has_degradation", False):
                        result_str += f"- Performance degradation detected at {concurrency.get('degradation_threshold')} threads\n"
                    else:
                        result_str += "- No significant performance degradation detected with increasing threads\n"
                    result_str += "\n"
                
                # Add recommendations
                insights = detailed_info.get("insights", {})
                recommendations = insights.get("recommendations", [])
                
                if recommendations:
                    result_str += "Recommendations:\n"
                    for rec in recommendations[:5]:  # Show top 5 recommendations
                        result_str += f"- [{rec.get('priority_level', 'medium').upper()}] {rec.get('recommendation')}\n"
                else:
                    result_str += "No specific recommendations available.\n"
                
                return result_str
                
            except ValueError as e:
                return f"Error analyzing JTL file: {str(e)}"
            
        except Exception as e:
            return f"Error identifying performance bottlenecks: {str(e)}"
  • The @mcp.tool() decorator registers this function as an MCP tool.
    @mcp.tool()
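  • For the decorator to work, a FastMCP server instance named mcp must exist at module scope. A minimal scaffold, sketched against the MCP Python SDK's FastMCP API (the server name string and entry-point wiring are assumptions):
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("jmeter-mcp-server")  # display name is an assumption

    # ...tool definitions such as identify_performance_bottlenecks() attach to
    # this instance via the @mcp.tool() decorator...

    if __name__ == "__main__":
        mcp.run()  # FastMCP defaults to the stdio transport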