execute_jmeter_test

Run JMeter performance tests by specifying the test file (.jmx), GUI mode, and custom properties. Simplify test execution and management through structured inputs.

Instructions

Execute a JMeter test.

Args:
    test_file: Path to the JMeter test file (.jmx)
    gui_mode: Whether to run in GUI mode (default: False)
    properties: Dictionary of JMeter properties to pass with -J (default: None)
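
For illustration, here is the shape of the arguments an MCP client might send when calling this tool. This is a sketch only: the file path and property names below are hypothetical, not values required by the server.

    # Hypothetical arguments payload for a call to execute_jmeter_test.
    arguments = {
        "test_file": "tests/load_test.jmx",               # required: path to the .jmx plan
        "gui_mode": False,                                # optional; defaults to False (non-GUI)
        "properties": {"threads": "50", "rampup": "60"},  # passed to JMeter as -Jthreads=50 -Jrampup=60
    }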

Input Schema

| Name       | Required | Description | Default |
|------------|----------|-------------|---------|
| gui_mode   | No       |             |         |
| properties | No       |             |         |
| test_file  | Yes      |             |         |

Output Schema

| Name   | Required | Description | Default |
|--------|----------|-------------|---------|
| result | Yes      |             |         |

Implementation Reference

  • The @mcp.tool() decorator registers the 'execute_jmeter_test' function as an MCP tool.
    @mcp.tool()
  • The handler function implementing the tool logic. It accepts test_file, gui_mode, and properties parameters and delegates execution to the run_jmeter helper function.
    async def execute_jmeter_test(test_file: str, gui_mode: bool = False, properties: dict = None) -> str:
        """Execute a JMeter test.
    
        Args:
            test_file: Path to the JMeter test file (.jmx)
            gui_mode: Whether to run in GUI mode (default: False)
            properties: Dictionary of JMeter properties to pass with -J (default: None)
        """
        return await run_jmeter(test_file, non_gui=not gui_mode, properties=properties)  # Run in non-GUI mode by default
  • The run_jmeter helper function contains the core logic for executing JMeter tests using subprocess, handling both GUI and non-GUI modes, properties, report generation, and error handling.
    async def run_jmeter(test_file: str, non_gui: bool = True, properties: dict = None, generate_report: bool = False, report_output_dir: str = None, log_file: str = None) -> str:
        """Run a JMeter test.
    
        Args:
            test_file: Path to the JMeter test file (.jmx)
            non_gui: Run in non-GUI mode (default: True)
            properties: Dictionary of JMeter properties to pass with -J (default: None)
            generate_report: Whether to generate report dashboard after load test (default: False)
            report_output_dir: Output folder for report dashboard (default: None)
            log_file: Name of JTL file to log sample results to (default: None)
    
        Returns:
            str: JMeter execution output
        """
        try:
            # Convert to absolute path
            test_file_path = Path(test_file).resolve()
            
            # Validate file exists and is a .jmx file
            if not test_file_path.exists():
                return f"Error: Test file not found: {test_file}"
            if not test_file_path.suffix == '.jmx':
                return f"Error: Invalid file type. Expected .jmx file: {test_file}"
    
            # Get JMeter binary path from environment
            jmeter_bin = os.getenv('JMETER_BIN', 'jmeter')
            java_opts = os.getenv('JMETER_JAVA_OPTS', '')
    
            # Log the JMeter binary path and Java options
            logger.info(f"JMeter binary path: {jmeter_bin}")
            logger.debug(f"Java options: {java_opts}")
    
            # Build command
            cmd = [str(Path(jmeter_bin).resolve())]
            
            if non_gui:
                cmd.extend(['-n'])
            cmd.extend(['-t', str(test_file_path)])
            
            # Add JMeter properties if provided
            if properties:
                for prop_name, prop_value in properties.items():
                    cmd.extend([f'-J{prop_name}={prop_value}'])
                    logger.debug(f"Adding property: -J{prop_name}={prop_value}")
            
            # Add report generation options if requested
            if generate_report and non_gui:
                if log_file is None:
                    # Generate unique log file name if not specified
                    unique_id = generate_unique_id()
                    log_file = f"{test_file_path.stem}_{unique_id}_results.jtl"
                    logger.debug(f"Using generated unique log file: {log_file}")
                
                cmd.extend(['-l', log_file])
                cmd.extend(['-e'])
                
                # Always ensure report_output_dir is unique
                unique_id = unique_id if 'unique_id' in locals() else generate_unique_id()
                
                if report_output_dir:
                    # Append unique identifier to user-provided report directory
                    original_dir = report_output_dir
                    report_output_dir = f"{original_dir}_{unique_id}"
                    logger.debug(f"Making user-provided report directory unique: {original_dir} -> {report_output_dir}")
                else:
                    # Generate unique report output directory if not specified
                    report_output_dir = f"{test_file_path.stem}_{unique_id}_report"
                    logger.debug(f"Using generated unique report output directory: {report_output_dir}")
                    
                cmd.extend(['-o', report_output_dir])
    
            # Log the full command for debugging
            logger.debug(f"Executing command: {' '.join(cmd)}")
            
            if non_gui:
                # For non-GUI mode, capture output
                result = subprocess.run(cmd, capture_output=True, text=True)
                
                # Log output for debugging
                logger.debug("Command output:")
                logger.debug(f"Return code: {result.returncode}")
                logger.debug(f"Stdout: {result.stdout}")
                logger.debug(f"Stderr: {result.stderr}")
    
                if result.returncode != 0:
                    return f"Error executing JMeter test:\n{result.stderr}"
                
                return result.stdout
            else:
                # For GUI mode, start process without capturing output
                subprocess.Popen(cmd)
                return "JMeter GUI launched successfully"
    
        except Exception as e:
            return f"Unexpected error: {str(e)}"

Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It mentions GUI mode and properties passing, but doesn't cover critical aspects like execution environment requirements, error handling, output format, or whether this is a blocking/long-running operation. For a test execution tool with zero annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured: a clear purpose statement followed by well-organized parameter documentation. Every sentence earns its place, and the information is front-loaded with the most important details first.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has an output schema (which covers return values), 3 parameters with 0% schema description coverage, and no annotations, the description does an adequate job of documenting parameters but lacks behavioral context. For a test execution tool, it should mention what happens during execution and what kind of results to expect.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description provides clear parameter documentation in the Args section, explaining what each parameter does. With 0% schema description coverage, this description fully compensates by documenting all 3 parameters with their purposes and defaults, adding significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
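
As a concrete illustration of the -J mapping the description relies on, the properties dict is flattened into one flag per entry, mirroring the loop in run_jmeter above (the property names and values here are made up):

    # How a properties dict becomes -J command-line flags.
    properties = {"threads": "10", "duration": "300"}
    flags = [f"-J{name}={value}" for name, value in properties.items()]
    print(flags)  # ['-Jthreads=10', '-Jduration=300']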

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Execute' and the resource 'JMeter test', making the purpose immediately understandable. However, it doesn't differentiate from sibling tools like 'execute_jmeter_test_non_gui', which appears to be a more specific version of this tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like 'execute_jmeter_test_non_gui' or other sibling tools. There's no mention of prerequisites, typical use cases, or when to choose GUI vs non-GUI mode.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/QAInsights/jmeter-mcp-server'
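
The same lookup from Python using only the standard library, assuming the endpoint returns JSON as the curl example suggests:

    # Fetch this server's MCP directory entry.
    import json
    import urllib.request

    url = "https://glama.ai/api/mcp/v1/servers/QAInsights/jmeter-mcp-server"
    with urllib.request.urlopen(url) as resp:
        server_info = json.load(resp)
    print(server_info)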

If you have feedback or need assistance with the MCP directory API, please join our Discord server.