# get_performance_insights
Analyze JMeter test results from JTL files to generate actionable performance insights and recommendations for optimization.
## Instructions
Get insights and recommendations for improving performance based on JMeter test results.
Args:
- `jtl_file`: Path to the JTL file containing test results

Returns:
- `str`: Performance insights and recommendations in a formatted string
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| jtl_file | Yes | Path to the JTL file containing test results | (none) |
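
An example invocation, assuming the official MCP Python SDK over a stdio transport (the launch command, script path, and JTL path are illustrative):

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Launch the server over stdio; command and args are illustrative.
    params = StdioServerParameters(command="python", args=["jmeter_server.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            # Arguments mirror the input schema above.
            result = await session.call_tool(
                "get_performance_insights",
                {"jtl_file": "/path/to/results.jtl"},
            )
            print(result.content)


asyncio.run(main())
```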
## Implementation Reference
- jmeter_server.py:402-472 (handler): The main handler for the `get_performance_insights` MCP tool, registered via the `@mcp.tool()` decorator. It validates that the JTL file exists, runs a detailed analysis with `TestResultsAnalyzer`, extracts insights and recommendations, formats them into a readable string, and appends a test summary.

  ```python
  @mcp.tool()
  async def get_performance_insights(jtl_file: str) -> str:
      """Get insights and recommendations for improving performance based on JMeter test results.

      Args:
          jtl_file: Path to the JTL file containing test results

      Returns:
          str: Performance insights and recommendations in a formatted string
      """
      try:
          analyzer = TestResultsAnalyzer()

          # Validate file exists
          file_path = Path(jtl_file)
          if not file_path.exists():
              return f"Error: JTL file not found: {jtl_file}"

          try:
              # Analyze the file with detailed analysis
              analysis_results = analyzer.analyze_file(file_path, detailed=True)

              # Format the results as a string
              result_str = f"Performance Insights for {jtl_file}:\n\n"

              # Add insights information
              detailed_info = analysis_results.get("detailed", {})
              insights = detailed_info.get("insights", {})

              if not insights:
                  return f"No insights available for {jtl_file}."

              # Recommendations
              recommendations = insights.get("recommendations", [])
              if recommendations:
                  result_str += "Recommendations:\n"
                  for i, rec in enumerate(recommendations[:5], 1):  # Show top 5 recommendations
                      result_str += f"{i}. [{rec.get('priority_level', 'medium').upper()}] {rec.get('issue')}\n"
                      result_str += f"   - Recommendation: {rec.get('recommendation')}\n"
                      result_str += f"   - Expected Impact: {rec.get('expected_impact')}\n"
                      result_str += f"   - Implementation Difficulty: {rec.get('implementation_difficulty')}\n\n"
              else:
                  result_str += "No specific recommendations available.\n\n"

              # Scaling insights
              scaling_insights = insights.get("scaling_insights", [])
              if scaling_insights:
                  result_str += "Scaling Insights:\n"
                  for i, insight in enumerate(scaling_insights, 1):
                      result_str += f"{i}. {insight.get('topic')}\n"
                      result_str += f"   {insight.get('description')}\n\n"
              else:
                  result_str += "No scaling insights available.\n\n"

              # Add summary metrics for context
              summary = analysis_results.get("summary", {})
              result_str += "Test Summary:\n"
              result_str += f"- Total samples: {summary.get('total_samples', 'N/A')}\n"
              result_str += f"- Error rate: {summary.get('error_rate', 'N/A'):.2f}%\n"
              result_str += f"- Average response time: {summary.get('average_response_time', 'N/A'):.2f} ms\n"
              result_str += f"- 95th percentile: {summary.get('percentile_95', 'N/A'):.2f} ms\n"
              result_str += f"- Throughput: {summary.get('throughput', 'N/A'):.2f} requests/second\n"

              return result_str

          except ValueError as e:
              return f"Error analyzing JTL file: {str(e)}"

      except Exception as e:
          return f"Error getting performance insights: {str(e)}"
  ```
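  One caveat in the summary section: `summary.get(..., 'N/A')` is combined with a `:.2f` float format spec, so a missing key yields the string `'N/A'` and the f-string raises `ValueError` instead of printing `N/A` (the inner `except ValueError` would then report it as an analysis error). A defensive helper, sketched here and not part of the source, avoids that:

  ```python
  def format_metric(value, suffix: str = "") -> str:
      """Hypothetical helper: format numeric metrics to two decimals and
      pass string fallbacks such as 'N/A' through unchanged, guarding
      against the ValueError raised when a string meets ':.2f'."""
      if isinstance(value, (int, float)):
          return f"{value:.2f}{suffix}"
      return f"{value}{suffix}"

  # e.g. result_str += f"- Error rate: {format_metric(summary.get('error_rate', 'N/A'), '%')}\n"
  ```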
- jmeter_server.py:402 (registration): The `@mcp.tool()` decorator registers the `get_performance_insights` function as an MCP tool.

  ```python
  @mcp.tool()
  ```
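
For context, the decorator implies a FastMCP server instance defined elsewhere in jmeter_server.py. A minimal registration pattern with the official MCP Python SDK looks like the following sketch (the server name is illustrative):

```python
from mcp.server.fastmcp import FastMCP

# Server name is illustrative; the actual instance lives in jmeter_server.py.
mcp = FastMCP("jmeter")


@mcp.tool()
async def get_performance_insights(jtl_file: str) -> str:
    """Registered under the function's name; the docstring becomes the tool description."""
    ...


if __name__ == "__main__":
    mcp.run()  # serves over stdio by default
```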