
run_tests

Execute tests using frameworks such as Bats, Pytest, Flutter, Jest, Go, and Rust, or run generic commands. Specify the command, working directory, output location, timeout, and security options; results are generated and stored programmatically.

Instructions

Run tests and capture output

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| command | Yes | Test command to execute (e.g., `"bats tests/*.bats"`) | |
| env | No | Environment variables for test execution | |
| framework | Yes | Testing framework being used (`bats`, `pytest`, `flutter`, `jest`, `go`, `rust`, or `generic`) | |
| outputDir | No | Directory to store test results | `test_reports` |
| securityOptions | No | Security options for command execution | |
| timeout | No | Test execution timeout in milliseconds | `300000` |
| workingDir | Yes | Working directory for test execution | |
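For orientation, an arguments object for a `run_tests` call might look like the following. This is a hypothetical invocation that follows the schema above; the paths and values are illustrative only.

```typescript
// Hypothetical run_tests arguments; shape follows the input schema above,
// but the project path and values are made up for illustration.
const args = {
  command: "pytest tests/ -v",        // required
  workingDir: "/home/user/project",   // required
  framework: "pytest",                // required; one of bats|pytest|flutter|jest|go|rust|generic
  outputDir: "test_reports",          // optional (default: "test_reports")
  timeout: 120000,                    // optional, in milliseconds (default: 300000)
  env: { CI: "true" },                // optional extra environment variables
  securityOptions: { allowShellExpansion: true }, // optional; enforced only for "generic"
};
```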

Implementation Reference

  • Main handler for the 'run_tests' tool via CallToolRequestSchema. Validates input, orchestrates test execution, result parsing, reporting, and response formatting.

```typescript
this.server.setRequestHandler(CallToolRequestSchema, async (request: Request) => {
  if (!request.params?.name) {
    throw new Error('Missing tool name');
  }
  if (request.params.name !== 'run_tests') {
    throw new Error(`Unknown tool: ${request.params.name}`);
  }
  if (!request.params.arguments) {
    throw new Error('Missing tool arguments');
  }

  const args = request.params.arguments as unknown as TestRunArguments;
  if (!this.isValidTestRunArguments(args)) {
    throw new Error('Invalid test run arguments');
  }

  const {
    command,
    workingDir,
    framework,
    outputDir = 'test_reports',
    timeout = DEFAULT_TIMEOUT,
    env,
    securityOptions
  } = args;

  // Validate command against security rules
  if (framework === 'generic') {
    const validation = validateCommand(command, securityOptions);
    if (!validation.isValid) {
      throw new Error(`Command validation failed: ${validation.reason}`);
    }
  }

  debug('Running tests with args:', { command, workingDir, framework, outputDir, timeout, env });

  // Create output directory
  const resultDir = join(workingDir, outputDir);
  await mkdir(resultDir, { recursive: true });

  try {
    // Run tests with timeout
    const { stdout, stderr } = await this.executeTestCommand(
      command, workingDir, framework, resultDir, timeout, env, securityOptions
    );

    // Save raw output
    await writeFile(join(resultDir, 'test_output.log'), stdout);
    if (stderr) {
      await writeFile(join(resultDir, 'test_errors.log'), stderr);
    }

    // Parse the test results using the appropriate parser
    try {
      const results = this.parseTestResults(framework, stdout, stderr);

      // Write parsed results to file
      await writeFile(join(resultDir, 'test_results.json'), JSON.stringify(results, null, 2));

      // Create a summary file
      const summaryContent = this.generateSummary(results);
      await writeFile(join(resultDir, 'summary.txt'), summaryContent);
    } catch (parseError) {
      debug('Error parsing test results:', parseError);
      // Still continue even if parsing fails
    }

    return {
      content: [
        {
          type: 'text',
          text: stdout + (stderr ? '\n' + stderr : ''),
        },
      ],
      isError: stdout.includes('failed') || stdout.includes('[E]') || stderr.length > 0,
    };
  } catch (error) {
    const errorMessage = error instanceof Error ? error.message : 'Unknown error occurred';
    debug('Test execution failed:', errorMessage);
    throw new Error(`Test execution failed: ${errorMessage}`);
  }
});
```
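The `validateCommand` helper invoked above lives elsewhere in the codebase and is not shown on this page. A minimal sketch of what such a validator could look like, assuming the default flags listed in the schema (an illustration, not the project's actual implementation):

```typescript
// Hypothetical sketch of a command validator enforcing the securityOptions
// defaults from the schema. Not the actual validateCommand from this project.
interface SecurityOptions {
  allowSudo: boolean;
  allowSu: boolean;
  allowShellExpansion: boolean;
  allowPipeToFile: boolean;
}

function validateCommand(
  command: string,
  opts?: Partial<SecurityOptions>
): { isValid: boolean; reason?: string } {
  // Defaults match those documented in the input schema.
  const o: SecurityOptions = {
    allowSudo: false,
    allowSu: false,
    allowShellExpansion: true,
    allowPipeToFile: false,
    ...opts,
  };
  if (!o.allowSudo && /\bsudo\b/.test(command)) {
    return { isValid: false, reason: 'sudo is not allowed' };
  }
  if (!o.allowSu && /\bsu\b/.test(command)) {
    return { isValid: false, reason: 'su is not allowed' };
  }
  if (!o.allowShellExpansion && /\$\(|`/.test(command)) {
    return { isValid: false, reason: 'shell expansion is not allowed' };
  }
  if (!o.allowPipeToFile && />/.test(command)) {
    return { isValid: false, reason: 'pipe-to-file is not allowed' };
  }
  return { isValid: true };
}
```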
  • Input schema definition for the 'run_tests' tool, specifying required parameters and types for command, working directory, framework, and optional fields.

```typescript
inputSchema: {
  type: 'object',
  properties: {
    command: {
      type: 'string',
      description: 'Test command to execute (e.g., "bats tests/*.bats")',
    },
    workingDir: {
      type: 'string',
      description: 'Working directory for test execution',
    },
    framework: {
      type: 'string',
      enum: ['bats', 'pytest', 'flutter', 'jest', 'go', 'rust', 'generic'],
      description: 'Testing framework being used',
    },
    outputDir: {
      type: 'string',
      description: 'Directory to store test results',
    },
    timeout: {
      type: 'number',
      description: 'Test execution timeout in milliseconds (default: 300000)',
    },
    env: {
      type: 'object',
      description: 'Environment variables for test execution',
      additionalProperties: { type: 'string' }
    },
    securityOptions: {
      type: 'object',
      description: 'Security options for command execution',
      properties: {
        allowSudo: { type: 'boolean', description: 'Allow sudo commands (default: false)' },
        allowSu: { type: 'boolean', description: 'Allow su commands (default: false)' },
        allowShellExpansion: { type: 'boolean', description: 'Allow shell expansion like $() or backticks (default: true)' },
        allowPipeToFile: { type: 'boolean', description: 'Allow pipe to file operations (default: false)' }
      }
    }
  },
  required: ['command', 'workingDir', 'framework'],
},
```
  • src/index.ts:42-102 (registration)
    Registration of the 'run_tests' tool in the MCP server capabilities.

```typescript
run_tests: {
  name: 'run_tests',
  description: 'Run tests and capture output',
  inputSchema: {
    type: 'object',
    properties: {
      command: { type: 'string', description: 'Test command to execute (e.g., "bats tests/*.bats")' },
      workingDir: { type: 'string', description: 'Working directory for test execution' },
      framework: {
        type: 'string',
        enum: ['bats', 'pytest', 'flutter', 'jest', 'go', 'rust', 'generic'],
        description: 'Testing framework being used',
      },
      outputDir: { type: 'string', description: 'Directory to store test results' },
      timeout: { type: 'number', description: 'Test execution timeout in milliseconds (default: 300000)' },
      env: {
        type: 'object',
        description: 'Environment variables for test execution',
        additionalProperties: { type: 'string' }
      },
      securityOptions: {
        type: 'object',
        description: 'Security options for command execution',
        properties: {
          allowSudo: { type: 'boolean', description: 'Allow sudo commands (default: false)' },
          allowSu: { type: 'boolean', description: 'Allow su commands (default: false)' },
          allowShellExpansion: { type: 'boolean', description: 'Allow shell expansion like $() or backticks (default: true)' },
          allowPipeToFile: { type: 'boolean', description: 'Allow pipe to file operations (default: false)' }
        }
      }
    },
    required: ['command', 'workingDir', 'framework'],
  },
},
```
  • Helper function that executes the test command using node:child_process.spawn, manages timeout, environment sanitization, stdout/stderr capture, and framework-specific configurations.

```typescript
private async executeTestCommand(
  command: string,
  workingDir: string,
  framework: Framework,
  resultDir: string,
  timeout: number,
  env?: Record<string, string>,
  securityOptions?: Partial<SecurityOptions>
): Promise<{ stdout: string; stderr: string }> {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(() => {
      reject(new Error('Test execution timed out'));
    }, timeout);

    // Split command into executable and args
    const parts = command.split(' ');
    const cmd = parts[0];
    const cmdArgs = parts.slice(1);

    debug('Executing command:', { cmd, cmdArgs, workingDir });

    // Sanitize environment variables for security
    const safeEnv = sanitizeEnvironmentVariables(env);

    const spawnOptions: SpawnOptions = {
      cwd: workingDir,
      env: { ...process.env, ...safeEnv },
      shell: true,
    };

    // Add framework-specific environment if needed
    if (framework === 'flutter') {
      spawnOptions.env = { ...spawnOptions.env, ...this.getFlutterEnv() };
      try {
        this.verifyFlutterInstallation(spawnOptions);
      } catch (error) {
        clearTimeout(timer);
        reject(error);
        return;
      }
    } else if (framework === 'rust') {
      // Ensure RUST_BACKTRACE is set for better error reporting
      spawnOptions.env = { ...spawnOptions.env, RUST_BACKTRACE: '1' };
    }

    const childProcess = spawn(cmd, cmdArgs, spawnOptions);

    let stdout = '';
    let stderr = '';

    childProcess.stdout?.on('data', (data: Buffer) => {
      const chunk = data.toString();
      stdout += chunk;
      debug('stdout chunk:', chunk);
    });

    childProcess.stderr?.on('data', (data: Buffer) => {
      const chunk = data.toString();
      stderr += chunk;
      debug('stderr chunk:', chunk);
    });

    childProcess.on('error', (error: Error) => {
      debug('Process error:', error);
      clearTimeout(timer);
      reject(error);
    });

    childProcess.on('close', async (code: number | null) => {
      clearTimeout(timer);
      debug('Process closed with code:', code);
      resolve({ stdout, stderr });
    });
  });
}
```
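One thing to note about the helper above: the timeout rejects the promise but leaves the spawned child running. A condensed sketch of the same spawn-with-timeout pattern, extended with a `child.kill()` on expiry so the process does not outlive the promise (a suggested variation, not this project's code):

```typescript
import { spawn } from "node:child_process";

// Condensed spawn-with-timeout sketch (hypothetical, simplified).
// Unlike the helper above, it kills the child when the timeout fires.
function runWithTimeout(
  cmd: string,
  args: string[],
  timeoutMs: number
): Promise<{ stdout: string; stderr: string }> {
  return new Promise((resolve, reject) => {
    const child = spawn(cmd, args, { shell: true });
    const timer = setTimeout(() => {
      child.kill(); // terminate the process on timeout
      reject(new Error("Test execution timed out"));
    }, timeoutMs);

    let stdout = "";
    let stderr = "";
    child.stdout?.on("data", (d: Buffer) => (stdout += d.toString()));
    child.stderr?.on("data", (d: Buffer) => (stderr += d.toString()));
    child.on("error", (e: Error) => {
      clearTimeout(timer);
      reject(e);
    });
    child.on("close", () => {
      clearTimeout(timer);
      resolve({ stdout, stderr });
    });
  });
}
```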
  • Helper method to parse test results using the framework-specific parser from parsers/index.js.

```typescript
parseTestResults(framework: Framework, stdout: string, stderr: string): ParsedResults {
  return TestParserFactory.parseTestResults(framework, stdout, stderr);
}
```
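`TestParserFactory` dispatches to a parser for each supported framework. For intuition, a toy parser for pytest's summary line might look like this (a hypothetical example; the real parsers presumably return a richer `ParsedResults` structure):

```typescript
// Toy framework-specific parser (hypothetical; not TestParserFactory).
// Pytest ends a run with a summary line such as "2 failed, 5 passed in 0.12s".
interface SummaryCounts {
  passed: number;
  failed: number;
}

function parsePytestSummary(stdout: string): SummaryCounts {
  const passed = Number(/(\d+) passed/.exec(stdout)?.[1] ?? 0);
  const failed = Number(/(\d+) failed/.exec(stdout)?.[1] ?? 0);
  return { passed, failed };
}
```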

MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/privsim/mcp-test-runner'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.