run_tests

Execute tests with frameworks such as Bats, Pytest, Flutter, Jest, Go, and Rust, or any generic command. Specify the command, working directory, output location, timeout, and security options; results are captured and stored programmatically.

Instructions

Run tests and capture output

Input Schema

Name             Required  Description                                           Default
command          Yes       Test command to execute (e.g., "bats tests/*.bats")
env              No        Environment variables for test execution
framework        Yes       Testing framework being used
outputDir        No        Directory to store test results                       test_reports
securityOptions  No        Security options for command execution
timeout          No        Test execution timeout in milliseconds                300000
workingDir       Yes       Working directory for test execution
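
To make the schema concrete, here is a hypothetical argument object an agent might send to this tool. The command, path, and values are illustrative only, not taken from the server:

```typescript
// Hypothetical run_tests arguments (illustrative values only).
const exampleArgs = {
  command: 'pytest tests/ -v',          // required
  workingDir: '/tmp/my-project',        // required; illustrative path
  framework: 'pytest',                  // required; must be one of the enum values
  outputDir: 'test_reports',            // optional; matches the handler's default
  timeout: 120000,                      // optional; milliseconds
  env: { CI: 'true' },                  // optional string map
  securityOptions: { allowShellExpansion: true }, // optional
};

console.log(Object.keys(exampleArgs).length); // 7 fields
```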

Implementation Reference

  • Main handler for the 'run_tests' tool via CallToolRequestSchema. Validates input, orchestrates test execution, result parsing, reporting, and response formatting.
    this.server.setRequestHandler(CallToolRequestSchema, async (request: Request) => {
      if (!request.params?.name) {
        throw new Error('Missing tool name');
      }
    
      if (request.params.name !== 'run_tests') {
        throw new Error(`Unknown tool: ${request.params.name}`);
      }
    
      if (!request.params.arguments) {
        throw new Error('Missing tool arguments');
      }
    
      const args = request.params.arguments as unknown as TestRunArguments;
      if (!this.isValidTestRunArguments(args)) {
        throw new Error('Invalid test run arguments');
      }
    
      const { command, workingDir, framework, outputDir = 'test_reports', timeout = DEFAULT_TIMEOUT, env, securityOptions } = args;
    
      // Validate command against security rules
      if (framework === 'generic') {
        const validation = validateCommand(command, securityOptions);
        if (!validation.isValid) {
          throw new Error(`Command validation failed: ${validation.reason}`);
        }
      }
    
      debug('Running tests with args:', { command, workingDir, framework, outputDir, timeout, env });
    
      // Create output directory
      const resultDir = join(workingDir, outputDir);
      await mkdir(resultDir, { recursive: true });
    
      try {
        // Run tests with timeout
        const { stdout, stderr } = await this.executeTestCommand(command, workingDir, framework, resultDir, timeout, env, securityOptions);
    
        // Save raw output
        await writeFile(join(resultDir, 'test_output.log'), stdout);
        if (stderr) {
          await writeFile(join(resultDir, 'test_errors.log'), stderr);
        }
    
        // Parse the test results using the appropriate parser
        try {
          const results = this.parseTestResults(framework, stdout, stderr);
          // Write parsed results to file
          await writeFile(join(resultDir, 'test_results.json'), JSON.stringify(results, null, 2));
          
          // Create a summary file
          const summaryContent = this.generateSummary(results);
          await writeFile(join(resultDir, 'summary.txt'), summaryContent);
        } catch (parseError) {
          debug('Error parsing test results:', parseError);
          // Still continue even if parsing fails
        }
    
        return {
          content: [
            {
              type: 'text',
              text: stdout + (stderr ? '\n' + stderr : ''),
            },
          ],
          // Heuristic: treat common failure markers or any stderr output as a failure
          isError: stdout.includes('failed') || stdout.includes('[E]') || stderr.length > 0,
        };
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : 'Unknown error occurred';
        debug('Test execution failed:', errorMessage);
        throw new Error(`Test execution failed: ${errorMessage}`);
      }
    });
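The handler above delegates argument checking to isValidTestRunArguments, which is not shown on this page. A minimal sketch consistent with the input schema might look like this (the name matches the call site, but the exact rules are assumptions):

```typescript
// Sketch of a validator like isValidTestRunArguments; the real
// implementation is not shown here, so treat this as an approximation.
const KNOWN_FRAMEWORKS = ['bats', 'pytest', 'flutter', 'jest', 'go', 'rust', 'generic'];

function isValidTestRunArguments(args: unknown): boolean {
  if (typeof args !== 'object' || args === null) return false;
  const a = args as Record<string, unknown>;
  // Required fields per the schema
  if (typeof a.command !== 'string' || a.command.length === 0) return false;
  if (typeof a.workingDir !== 'string' || a.workingDir.length === 0) return false;
  if (typeof a.framework !== 'string' || !KNOWN_FRAMEWORKS.includes(a.framework)) return false;
  // Optional fields must have the right type when present
  if (a.timeout !== undefined && typeof a.timeout !== 'number') return false;
  if (a.outputDir !== undefined && typeof a.outputDir !== 'string') return false;
  return true;
}
```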
  • Input schema definition for the 'run_tests' tool, specifying required parameters and types for command, working directory, framework, and optional fields.
    inputSchema: {
      type: 'object',
      properties: {
        command: {
          type: 'string',
          description: 'Test command to execute (e.g., "bats tests/*.bats")',
        },
        workingDir: {
          type: 'string',
          description: 'Working directory for test execution',
        },
        framework: {
          type: 'string',
          enum: ['bats', 'pytest', 'flutter', 'jest', 'go', 'rust', 'generic'],
          description: 'Testing framework being used',
        },
        outputDir: {
          type: 'string',
          description: 'Directory to store test results',
        },
        timeout: {
          type: 'number',
          description: 'Test execution timeout in milliseconds (default: 300000)',
        },
        env: {
          type: 'object',
          description: 'Environment variables for test execution',
          additionalProperties: {
            type: 'string'
          }
        },
        securityOptions: {
          type: 'object',
          description: 'Security options for command execution',
          properties: {
            allowSudo: {
              type: 'boolean',
              description: 'Allow sudo commands (default: false)'
            },
            allowSu: {
              type: 'boolean',
              description: 'Allow su commands (default: false)'
            },
            allowShellExpansion: {
              type: 'boolean',
              description: 'Allow shell expansion like $() or backticks (default: true)'
            },
            allowPipeToFile: {
              type: 'boolean',
              description: 'Allow pipe to file operations (default: false)'
            }
          }
        }
      },
      required: ['command', 'workingDir', 'framework'],
    },
  • src/index.ts:42-102 (registration)
    Registration of the 'run_tests' tool in the MCP server capabilities.
      run_tests: {
        name: 'run_tests',
        description: 'Run tests and capture output',
        inputSchema: {
          type: 'object',
          properties: {
            command: {
              type: 'string',
              description: 'Test command to execute (e.g., "bats tests/*.bats")',
            },
            workingDir: {
              type: 'string',
              description: 'Working directory for test execution',
            },
            framework: {
              type: 'string',
              enum: ['bats', 'pytest', 'flutter', 'jest', 'go', 'rust', 'generic'],
              description: 'Testing framework being used',
            },
            outputDir: {
              type: 'string',
              description: 'Directory to store test results',
            },
            timeout: {
              type: 'number',
              description: 'Test execution timeout in milliseconds (default: 300000)',
            },
            env: {
              type: 'object',
              description: 'Environment variables for test execution',
              additionalProperties: {
                type: 'string'
              }
            },
            securityOptions: {
              type: 'object',
              description: 'Security options for command execution',
              properties: {
                allowSudo: {
                  type: 'boolean',
                  description: 'Allow sudo commands (default: false)'
                },
                allowSu: {
                  type: 'boolean',
                  description: 'Allow su commands (default: false)'
                },
                allowShellExpansion: {
                  type: 'boolean',
                  description: 'Allow shell expansion like $() or backticks (default: true)'
                },
                allowPipeToFile: {
                  type: 'boolean',
                  description: 'Allow pipe to file operations (default: false)'
                }
              }
            }
          },
          required: ['command', 'workingDir', 'framework'],
        },
      },
    },
  • Helper function that executes the test command using node:child_process.spawn, manages timeout, environment sanitization, stdout/stderr capture, and framework-specific configurations.
    private async executeTestCommand(
      command: string,
      workingDir: string,
      framework: Framework,
      resultDir: string,
      timeout: number,
      env?: Record<string, string>,
      securityOptions?: Partial<SecurityOptions>
    ): Promise<{ stdout: string; stderr: string }> {
      return new Promise((resolve, reject) => {
        const timer = setTimeout(() => {
          // Note: rejecting here does not kill the spawned child process,
          // so a hung command may keep running after the timeout fires.
          reject(new Error('Test execution timed out'));
        }, timeout);
    
        // Split command into executable and args (note: a plain space split
        // does not handle quoted arguments; spawn runs with shell: true below)
        const parts = command.split(' ');
        const cmd = parts[0];
        const cmdArgs = parts.slice(1);
    
        debug('Executing command:', { cmd, cmdArgs, workingDir });
    
        // Sanitize environment variables for security
        const safeEnv = sanitizeEnvironmentVariables(env);
    
        const spawnOptions: SpawnOptions = {
          cwd: workingDir,
          env: { ...process.env, ...safeEnv },
          shell: true,
        };
    
        // Add framework-specific environment if needed
        if (framework === 'flutter') {
          spawnOptions.env = {
            ...spawnOptions.env,
            ...this.getFlutterEnv()
          };
          
          try {
            this.verifyFlutterInstallation(spawnOptions);
          } catch (error) {
            clearTimeout(timer);
            reject(error);
            return;
          }
        } else if (framework === 'rust') {
          // Ensure RUST_BACKTRACE is set for better error reporting
          spawnOptions.env = {
            ...spawnOptions.env,
            RUST_BACKTRACE: '1'
          };
        }
    
        const childProcess = spawn(cmd, cmdArgs, spawnOptions);
    
        let stdout = '';
        let stderr = '';
    
        childProcess.stdout?.on('data', (data: Buffer) => {
          const chunk = data.toString();
          stdout += chunk;
          debug('stdout chunk:', chunk);
        });
    
        childProcess.stderr?.on('data', (data: Buffer) => {
          const chunk = data.toString();
          stderr += chunk;
          debug('stderr chunk:', chunk);
        });
    
        childProcess.on('error', (error: Error) => {
          debug('Process error:', error);
          clearTimeout(timer);
          reject(error);
        });
    
        childProcess.on('close', (code: number | null) => {
          clearTimeout(timer);
          debug('Process closed with code:', code);
          resolve({ stdout, stderr });
        });
      });
    }
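  • The validateCommand helper referenced in the main handler is not listed on this page. A hedged sketch consistent with the securityOptions schema above might look like the following; the actual rules and patterns used by the server may differ:

```typescript
// Approximate validateCommand-style check, derived from the securityOptions
// schema (allowSudo, allowSu, allowShellExpansion, allowPipeToFile).
// This is an illustrative sketch, not the server's real implementation.
interface SecurityOptions {
  allowSudo?: boolean;
  allowSu?: boolean;
  allowShellExpansion?: boolean;
  allowPipeToFile?: boolean;
}

function validateCommand(
  command: string,
  opts: SecurityOptions = {}
): { isValid: boolean; reason?: string } {
  const {
    allowSudo = false,
    allowSu = false,
    allowShellExpansion = true,
    allowPipeToFile = false,
  } = opts;

  if (!allowSudo && /\bsudo\b/.test(command)) {
    return { isValid: false, reason: 'sudo is not allowed' };
  }
  if (!allowSu && /\bsu\b/.test(command)) {
    return { isValid: false, reason: 'su is not allowed' };
  }
  if (!allowShellExpansion && /\$\(|`/.test(command)) {
    return { isValid: false, reason: 'shell expansion is not allowed' };
  }
  // Coarse check: any output redirection counts as piping to a file here
  if (!allowPipeToFile && />/.test(command)) {
    return { isValid: false, reason: 'redirecting output to files is not allowed' };
  }
  return { isValid: true };
}
```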
  • Helper method to parse test results using the framework-specific parser from parsers/index.js.
    parseTestResults(framework: Framework, stdout: string, stderr: string): ParsedResults {
      return TestParserFactory.parseTestResults(framework, stdout, stderr);
    }
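  • The generateSummary helper used by the main handler is likewise not shown. As an illustration of the summary.txt it writes, here is a minimal sketch; the ParsedResults shape (passed/failed/skipped counts) is an assumption:

```typescript
// Illustrative generateSummary sketch; the real ParsedResults type from
// the parsers module is not shown on this page, so its fields are assumed.
interface ParsedResults {
  passed: number;
  failed: number;
  skipped: number;
}

function generateSummary(results: ParsedResults): string {
  const total = results.passed + results.failed + results.skipped;
  const status = results.failed > 0 ? 'FAILED' : 'PASSED';
  return [
    `Status:  ${status}`,
    `Total:   ${total}`,
    `Passed:  ${results.passed}`,
    `Failed:  ${results.failed}`,
    `Skipped: ${results.skipped}`,
  ].join('\n');
}
```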
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden but offers minimal behavioral insight. It mentions 'capture output' but doesn't describe output format, error handling, side effects, or security implications. The description doesn't contradict annotations (none exist), but fails to disclose important behavioral traits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just five words, front-loaded with the core action and outcome. Every word earns its place with zero redundancy or unnecessary elaboration.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a complex tool with 7 parameters, nested objects, no annotations, and no output schema, the description is insufficient. It doesn't explain return values, error conditions, security considerations, or typical usage patterns that would help an agent understand this execution tool's behavior.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, providing detailed parameter documentation. The description adds no additional parameter semantics beyond the schema's comprehensive coverage, so it meets the baseline of 3 for high schema coverage without compensating value.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Run tests and capture output' clearly states the action (run tests) and outcome (capture output), but lacks specificity about what types of tests or how they're executed. It doesn't distinguish from siblings (none exist), but remains somewhat vague about scope and implementation details.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives, prerequisites, or typical scenarios. With no sibling tools mentioned, differentiation isn't needed, but there's still no context about appropriate use cases or limitations.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
