
generate_visualization

Create visualizations of JMeter test results from JTL files. Choose from time series, distribution, comparison, or HTML report formats. Save outputs for performance analysis and reporting.

Instructions

Generate visualizations of JMeter test results.

Args:
    jtl_file: Path to the JTL file containing test results
    visualization_type: Type of visualization to generate (time_series, distribution, comparison, html_report)
    output_file: Path to save the visualization

Returns:
    str: Path to the generated visualization file

Input Schema

Name                Required  Description                                                                              Default
jtl_file            Yes       Path to the JTL file containing test results                                             -
output_file         Yes       Path to save the visualization                                                           -
visualization_type  Yes       Type of visualization to generate (time_series, distribution, comparison, html_report)   -
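
For reference, here is a minimal sketch of invoking this tool from the official Python MCP SDK. It assumes session is an already-initialized mcp.ClientSession connected to this server (connection setup is omitted), and the file paths are placeholders:

import asyncio
from mcp import ClientSession

async def render_report(session: ClientSession) -> None:
    # Call the tool with its three required arguments.
    result = await session.call_tool(
        "generate_visualization",
        arguments={
            "jtl_file": "results.jtl",            # hypothetical input path
            "visualization_type": "html_report",  # or time_series, distribution, comparison
            "output_file": "report.html",         # hypothetical output path
        },
    )
    # The tool returns its status message (the generated file path,
    # or an error string) as text content.
    print(result.content[0].text)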

Implementation Reference

  • The main handler function for the 'generate_visualization' MCP tool. It is registered via the @mcp.tool() decorator and generates the requested visualization type (time_series, distribution, comparison, or html_report) from a JMeter JTL file using TestResultsAnalyzer and VisualizationEngine.
    @mcp.tool()
    async def generate_visualization(jtl_file: str, visualization_type: str, output_file: str) -> str:
        """Generate visualizations of JMeter test results.

        Args:
            jtl_file: Path to the JTL file containing test results
            visualization_type: Type of visualization to generate
                (time_series, distribution, comparison, html_report)
            output_file: Path to save the visualization

        Returns:
            str: Path to the generated visualization file
        """
        try:
            analyzer = TestResultsAnalyzer()

            # Validate file exists
            file_path = Path(jtl_file)
            if not file_path.exists():
                return f"Error: JTL file not found: {jtl_file}"

            try:
                # Analyze the file with detailed analysis
                analysis_results = analyzer.analyze_file(file_path, detailed=True)

                # Create visualization engine
                output_dir = os.path.dirname(output_file) if output_file else None
                engine = VisualizationEngine(output_dir=output_dir)

                # Generate visualization based on type
                if visualization_type == "time_series":
                    # Extract time series metrics
                    time_series = analysis_results.get("detailed", {}).get("time_series", [])
                    if not time_series:
                        return "No time series data available for visualization."

                    # Convert to TimeSeriesMetrics objects
                    metrics = []
                    for ts_data in time_series:
                        metrics.append(TimeSeriesMetrics(
                            timestamp=datetime.datetime.fromisoformat(ts_data["timestamp"]),
                            active_threads=ts_data["active_threads"],
                            throughput=ts_data["throughput"],
                            average_response_time=ts_data["average_response_time"],
                            error_rate=ts_data["error_rate"]
                        ))

                    # Create visualization
                    output_path = engine.create_time_series_graph(
                        metrics, metric_name="average_response_time", output_file=output_file)
                    return f"Time series graph generated: {output_path}"

                elif visualization_type == "distribution":
                    # Extract response times
                    samples = []
                    for endpoint, metrics in analysis_results.get("detailed", {}).get("endpoints", {}).items():
                        samples.extend([metrics["average_response_time"]] * metrics["total_samples"])
                    if not samples:
                        return "No response time data available for visualization."

                    # Create visualization
                    output_path = engine.create_distribution_graph(samples, output_file=output_file)
                    return f"Distribution graph generated: {output_path}"

                elif visualization_type == "comparison":
                    # Extract endpoint metrics
                    endpoints = analysis_results.get("detailed", {}).get("endpoints", {})
                    if not endpoints:
                        return "No endpoint data available for visualization."

                    # Convert to EndpointMetrics objects
                    endpoint_metrics = {}
                    for endpoint, metrics_data in endpoints.items():
                        endpoint_metrics[endpoint] = EndpointMetrics(
                            endpoint=endpoint,
                            total_samples=metrics_data["total_samples"],
                            error_count=metrics_data["error_count"],
                            error_rate=metrics_data["error_rate"],
                            average_response_time=metrics_data["average_response_time"],
                            median_response_time=metrics_data["median_response_time"],
                            percentile_90=metrics_data["percentile_90"],
                            percentile_95=metrics_data["percentile_95"],
                            percentile_99=metrics_data["percentile_99"],
                            min_response_time=metrics_data["min_response_time"],
                            max_response_time=metrics_data["max_response_time"],
                            throughput=metrics_data["throughput"],
                            test_duration=analysis_results["summary"]["duration"]
                        )

                    # Create visualization
                    output_path = engine.create_endpoint_comparison_chart(
                        endpoint_metrics, metric_name="average_response_time", output_file=output_file)
                    return f"Endpoint comparison chart generated: {output_path}"

                elif visualization_type == "html_report":
                    # Create HTML report
                    output_path = engine.create_html_report(analysis_results, output_file)
                    return f"HTML report generated: {output_path}"

                else:
                    return f"Unknown visualization type: {visualization_type}. " \
                           f"Supported types: time_series, distribution, comparison, html_report"
            except ValueError as e:
                return f"Error generating visualization: {str(e)}"
        except Exception as e:
            return f"Error generating visualization: {str(e)}"
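
Because the handler is a plain async function, it can also be exercised directly for local debugging, outside the MCP runtime. A minimal sketch, assuming the server module is importable as jmeter_server (the module name is an assumption) and a results.jtl file exists locally:

import asyncio

# Module name is an assumption; import from wherever the server defines the tool.
from jmeter_server import generate_visualization

result = asyncio.run(generate_visualization(
    jtl_file="results.jtl",        # hypothetical JTL file
    visualization_type="comparison",
    output_file="comparison.png",  # hypothetical output path
))
print(result)  # path to the generated chart, or an error message

Note that the distribution branch does not plot raw per-sample response times: it approximates them by repeating each endpoint's average response time total_samples times, so the resulting histogram shows endpoint averages weighted by sample count rather than the true response-time distribution.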


MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/QAInsights/jmeter-mcp-server'
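
The same lookup in Python, as a minimal sketch using only the standard library (the shape of the JSON response is not documented here, so the example simply prints the raw payload):

import json
import urllib.request

# Fetch this server's metadata from the Glama MCP directory API.
url = "https://glama.ai/api/mcp/v1/servers/QAInsights/jmeter-mcp-server"
with urllib.request.urlopen(url) as resp:
    server = json.load(resp)

# The response structure is an assumption to explore; inspect it first.
print(json.dumps(server, indent=2))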

If you have feedback or need assistance with the MCP directory API, please join our Discord server.