run_tests

Run Jest tests for a service by specifying its relative path from repo root. Returns pass/fail counts and structured failure details with test names and error messages. Optionally filter tests with a name pattern.

Instructions

Run Jest tests for a service. Returns pass/fail counts and structured failure details with test names and error messages. Optionally scope to a test name pattern.

Input Schema

Name        | Required | Description                                                                            | Default
servicePath | Yes      | Relative path from repo root to the service to test. E.g. "ops/control-panel/server". | (none)
pattern     | No       | Optional Jest test name pattern to run a subset of tests.                              | (none)
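As a concrete illustration of the two parameters, a tool call might pass arguments like the following (the service path mirrors the schema's own example; the pattern value is hypothetical):

```typescript
// Illustrative run_tests arguments.
const args = {
  servicePath: 'ops/control-panel/server', // required: service dir relative to repo root
  pattern: 'login', // optional: Jest test name pattern to run a subset of tests
};
console.log(JSON.stringify(args));
```

Omitting pattern runs the service's full Jest suite.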

Implementation Reference

  • Core implementation of runTests: resolves service path, invokes npm test with --json output, parses Jest JSON results, and returns structured pass/fail counts with failure details.
    async function runTests(servicePath: string, pattern?: string): Promise<TestResult> {
      const absPath = resolveSafe(repoDir, servicePath);
      const outputFile = join(tmpdir(), `mcp-dev-jest-${Date.now()}.json`);
    
      const npmArgs = [
        'test',
        '--',
        '--forceExit',
        '--json',
        `--outputFile=${outputFile}`,
      ];
      if (pattern) npmArgs.push(`--testNamePattern=${pattern}`);
    
      await proc.run('npm', npmArgs, {
        cwd: absPath,
        timeout: 120_000,
        env: { CI: 'true' },
      });
    
      let jsonStr: string;
      try {
        jsonStr = await readFile(outputFile, 'utf8');
      } catch {
        return {
          success: false,
          passed: 0,
          failed: 0,
          skipped: 0,
          failures: [
            {
              suite: 'runner',
              test: 'startup',
              messages: ['Jest did not produce output. Check that the service defines an npm test script.'],
            },
          ],
        };
      } finally {
        await unlink(outputFile).catch(() => {});
      }
    
      let json: JestJsonOutput;
      try {
        json = JSON.parse(jsonStr) as JestJsonOutput;
      } catch {
        return {
          success: false,
          passed: 0,
          failed: 0,
          skipped: 0,
          failures: [{ suite: 'runner', test: 'parse', messages: ['Failed to parse Jest JSON output'] }],
        };
      }
    
      const failures: TestFailure[] = [];
      for (const suite of json.testResults) {
        const suiteName = relative(repoDir, suite.testFilePath);
        for (const t of suite.assertionResults) {
          if (t.status === 'failed') {
            failures.push({
              suite: suiteName,
              test: t.fullName,
              messages: t.failureMessages.map((m) => m.slice(0, 1_500)),
            });
          }
        }
      }
    
      return {
        success: json.numFailedTests === 0,
        passed: json.numPassedTests,
        failed: json.numFailedTests,
        skipped: json.numPendingTests,
        failures,
      };
    }
  • Zod schema RunTestsSchema defining input parameters: servicePath (required string) and pattern (optional string for Jest test name pattern).
    export const RunTestsSchema = z.object({
      servicePath: z
        .string()
        .describe(
          'Relative path from repo root to the service to test. E.g. "ops/control-panel/server".',
        ),
      pattern: z
        .string()
        .optional()
        .describe('Optional Jest test name pattern to run a subset of tests.'),
    });
  • Registration of 'run_tests' tool on the MCP server with schema and handler that delegates to manager.runTests().
    server.tool(
      'run_tests',
      'Run Jest tests for a service. Returns pass/fail counts and structured failure details with test names and error messages. Optionally scope to a test name pattern.',
      RunTestsSchema.shape,
      async (args) => {
        const result = await manager.runTests(args.servicePath, args.pattern);
        return {
          content: [{ type: 'text' as const, text: JSON.stringify(result, null, 2) }],
        };
      },
    );
  • JestJsonOutput interface used to parse the Jest --json output file for test results.
    interface JestJsonOutput {
      numPassedTests: number;
      numFailedTests: number;
      numPendingTests: number;
      testResults: Array<{
        testFilePath: string;
        assertionResults: Array<{
          status: string;
          fullName: string;
          failureMessages: string[];
        }>;
      }>;
    }
  • TestResult and TestFailure interfaces defining the return type structure for runTests.
    export interface TestFailure {
      suite: string;
      test: string;
      messages: string[];
    }

    export interface TestResult {
      success: boolean;
      passed: number;
      failed: number;
      skipped: number;
      failures: TestFailure[];
    }
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description must disclose behavior. It describes the return values (pass/fail counts, failure details) but does not mention side effects, permissions, or environmental impact (e.g., is the test isolated? Does it require a service to be running?).

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Two concise sentences: first covers purpose and return, second covers parameter usage. No wasted words or redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given simple parameters, no output schema, and no annotations, the description adequately covers what the tool does, what it returns, and how to use the optional parameter. Slightly incomplete as it omits error conditions or prerequisites, but sufficient for the tool's simplicity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100%, so baseline is 3. The description restates that the pattern parameter is optional ('Optionally scope to a test name pattern'), adding no new meaning beyond the schema descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly specifies the action ('Run Jest tests for a service'), the resource ('Jest tests'), and the context ('for a service'). It distinguishes from siblings like run_typecheck by naming the test framework and purpose.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage is for running tests, but it does not explicitly state when to use this tool versus alternatives like run_typecheck. It mentions optional scoping with a pattern but lacks exclusion guidance.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
