run_tests

Execute tests for the active Xcode project to verify code functionality and identify issues. Optionally specify a test plan to run specific test suites.

Instructions

Executes tests for the active Xcode project.

Input Schema

Name      Required  Description                              Default
testPlan  No        Optional name of the test plan to run.  (none)
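
For illustration, a call using this documented schema might pass arguments like the following; the test plan name is a hypothetical placeholder, not taken from the server:

    // Hypothetical run_tests arguments per the documented schema above.
    const args = {
        testPlan: "UnitTests"  // placeholder name; omit to run the default test plan
    };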

Implementation Reference

  • Main handler implementing the run_tests tool logic: runs xcodebuild tests with filtering options, captures and processes output, generates logs, test reports, and code coverage.
    private async runTests(
        projectPath: string,
        scheme: string,
        configuration: string = "Debug",
        testIdentifier?: string,
        skipTests?: string[],
        destination: string = "platform=iOS Simulator,name=iPhone 15 Pro"
    ): Promise<{ success: boolean; output: string; logPath: string }> {
        const timestamp = new Date().toISOString().replace(/[:.]/g, '-');
        const logPath = path.join(this.buildLogsDir, `test-${timestamp}.log`);
        const projectDir = path.dirname(projectPath);
        const reportsPath = path.join(projectDir, 'TestReports', `Reports-${timestamp}`);
        const xcresultPath = `${reportsPath}.xcresult`;
    
        try {
            await mkdir(path.join(projectDir, 'TestReports'), { recursive: true });
        } catch (error) {
            console.error(`Failed to prepare test reports directory: ${error}`);
        }
    
        // Build the xcodebuild action plus any test filters.
        let testFlags = 'test';
        if (testIdentifier) {
            testFlags += ` -only-testing:${testIdentifier}`;
        }
        if (skipTests?.length) {
            testFlags += ` ${skipTests.map(test => `-skip-testing:${test}`).join(' ')}`;
        }
    
        // Run xcodebuild with JSON-formatted output, teeing everything to the log file.
        // Note: piping through tee masks xcodebuild's exit code, so success is
        // detected below by scanning the output for failure markers.
        const command = `which xcodebuild && xcodebuild -project "${projectPath}" \
            -scheme "${scheme}" \
            -configuration "${configuration}" \
            -destination '${destination}' \
            -resultBundlePath "${xcresultPath}" \
            -enableCodeCoverage YES \
            -UseModernBuildSystem=YES \
            -json \
            clean ${testFlags} 2>&1 | tee "${logPath}"`;
    
        try {
            const { stdout, stderr } = await execAsync(command, { maxBuffer: 100 * 1024 * 1024 });
    
            // Persist a structured copy of the line-delimited JSON events from xcodebuild -json.
            try {
                const jsonOutput = stdout.split('\n')
                    .filter(line => line.trim())
                    .map(line => {
                        try { return JSON.parse(line); }
                        catch (e) { return line; }
                    });
                await writeFile(logPath + '.json', JSON.stringify(jsonOutput, null, 2));
            } catch (parseError) {
                console.error('Failed to parse JSON output:', parseError);
            }
    
            // Process test results using xcresulttool
            if (fs.existsSync(xcresultPath)) {
                try {
                    // Get test summary
                    const summaryCmd = `xcrun xcresulttool get --format json --path "${xcresultPath}"`;
                    const { stdout: summaryOutput } = await execAsync(summaryCmd);
                    await writeFile(path.join(this.buildLogsDir, `test-summary-${timestamp}.json`), summaryOutput);
    
                    // Get code coverage if available
                    const coverageOutput = await execAsync(`xcrun xccov view --report "${xcresultPath}"`);
                    await writeFile(path.join(this.buildLogsDir, `coverage-${timestamp}.txt`), coverageOutput.stdout);
                } catch (resultsError) {
                    console.error('Failed to process test results:', resultsError);
                }
            }
    
            // xcodebuild prints these markers on failure; their absence is treated as success.
            const success = !stdout.includes('** TEST FAILED **') && !stdout.includes('** BUILD FAILED **');
            return { success, output: stdout + stderr, logPath };
        } catch (error) {
            console.error('Test error:', error);
            if (error instanceof Error) {
                const execError = error as { stderr?: string };
                const errorOutput = error.message + (execError.stderr ? `\n${execError.stderr}` : '');
                await writeFile(logPath, errorOutput);
                return { success: false, output: errorOutput, logPath };
            }
            throw error;
        }
    }
  • src/index.ts:295-325 (registration)
    Registers the "run_tests" tool in the MCP ListToolsRequestSchema handler, defining its name, description, and JSON input schema.
        name: "run_tests",
        description: "Run Xcode project tests with optional filtering",
        inputSchema: {
            type: "object",
            properties: {
                projectPath: {
                    type: "string",
                    description: "Path to the .xcodeproj or .xcworkspace"
                },
                scheme: {
                    type: "string",
                    description: "Test scheme name"
                },
                testIdentifier: {
                    type: "string",
                    description: "Optional specific test to run (e.g., 'MyTests/testExample')"
                },
                skipTests: {
                    type: "array",
                    items: { type: "string" },
                    description: "Optional array of test identifiers to skip"
                },
                configuration: {
                    type: "string",
                    description: "Build configuration (e.g., Debug, Release)",
                    default: "Debug"
                }
            },
            required: ["projectPath", "scheme"]
        }
    }]
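    As a rough sketch, arguments satisfying this registered schema could look like the following (all paths and names are hypothetical placeholders):
        // Hypothetical arguments matching the inputSchema above.
        const exampleArgs = {
            projectPath: "/path/to/MyApp.xcodeproj",  // placeholder path
            scheme: "MyApp",                          // placeholder scheme name
            testIdentifier: "MyAppTests/testExample", // optional: run only this test
            skipTests: ["MyAppUITests"],              // optional: identifiers to skip
            configuration: "Debug"                    // defaults to "Debug"
        };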
  • TypeScript interface and type guard for validating run_tests tool input arguments.
    interface TestArguments {
        projectPath: string;
        scheme: string;
        testIdentifier?: string;
        skipTests?: string[];
        configuration?: string;
        destination?: string;
    }
    
    interface BuildOptions extends BuildArguments {
        includeWarnings?: boolean;
    }
    
    interface TestOptions extends TestArguments {
        includeWarnings?: boolean;
    }
    
    // Type guards validating raw tool arguments at the MCP boundary.
    function isBuildOptions(args: unknown): args is BuildOptions {
        if (!isBuildArguments(args)) return false;
        const a = args as Partial<BuildOptions>;
        return a.includeWarnings === undefined || typeof a.includeWarnings === 'boolean';
    }
    
    function isTestOptions(args: unknown): args is TestOptions {
        if (!isTestArguments(args)) return false;
        const a = args as Partial<TestOptions>;
        return a.includeWarnings === undefined || typeof a.includeWarnings === 'boolean';
    }
    
    function isTestArguments(args: unknown): args is TestArguments {
        if (typeof args !== 'object' || args === null) return false;
        const a = args as Partial<TestArguments>;
        return (
            typeof a.projectPath === 'string' &&
            typeof a.scheme === 'string' &&
            (a.testIdentifier === undefined || typeof a.testIdentifier === 'string') &&
            (a.skipTests === undefined || (Array.isArray(a.skipTests) && a.skipTests.every(t => typeof t === 'string'))) &&
            (a.configuration === undefined || typeof a.configuration === 'string') &&
            (a.destination === undefined || typeof a.destination === 'string')
        );
    }
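    A minimal sketch of how the guard narrows an unknown payload before dispatch (values are illustrative):
        // Illustrative narrowing of an untrusted payload via the type guard.
        const payload: unknown = { projectPath: "/tmp/Demo.xcodeproj", scheme: "Demo" };
        if (isTestArguments(payload)) {
            // payload is now typed as TestArguments; optional fields remain optional.
            console.log(payload.scheme);    // "Demo"
            console.log(payload.skipTests); // undefined
        }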
  • MCP CallToolRequestSchema dispatch case for "run_tests": validates arguments and invokes the runTests handler.
    case "run_tests": {
        if (!isTestArguments(request.params.arguments)) {
            throw new McpError(ErrorCode.InvalidParams, "Invalid test arguments provided");
        }
        const result = await this.runTests(
            request.params.arguments.projectPath,
            request.params.arguments.scheme,
            request.params.arguments.configuration,
            request.params.arguments.testIdentifier,
            request.params.arguments.skipTests,
            request.params.arguments.destination
        );
        this.latestBuildLog = result.logPath;
        return {
            content: [{
                type: "text",
                text: result.output
            }],
            isError: !result.success
        };
    }
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It states the tool executes tests but doesn't describe what this entails—e.g., whether it runs all tests or specific ones, if it requires a simulator (like 'boot_simulator'), what output or errors to expect, or if it's a blocking/long-running operation. This leaves significant gaps in understanding the tool's behavior.
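
For instance, a description that disclosed behavior might read along these lines (wording is illustrative, not from the server; the behaviors named are those visible in the handler above):

    // Illustrative only: a more disclosure-complete description string.
    const improvedDescription =
        "Run tests for the active Xcode project via xcodebuild. Blocking and " +
        "potentially long-running; may require a booted simulator; writes logs, " +
        ".xcresult bundles, and coverage reports into the project directory.";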

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, clear sentence with no wasted words. It's front-loaded with the core action and target, making it easy to parse quickly. Every part of the sentence earns its place by conveying essential information without redundancy.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of test execution in Xcode (which can involve simulators, dependencies, etc.), the combination of no annotations, no output schema, and a simple but incomplete description is inadequate. The description doesn't cover behavioral aspects, error handling, or integration with sibling tools, making it incomplete for effective agent use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage, with one optional parameter 'testPlan' documented as 'Optional name of the test plan to run.' The description adds no additional parameter semantics beyond this, so it meets the baseline of 3 by not contradicting the schema but doesn't provide extra value like examples or constraints.
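
As a sketch, a richer 'testPlan' entry might add an example value and note the default behavior (wording is illustrative):

    // Illustrative only: a testPlan schema entry with an example and default noted.
    const testPlanProperty = {
        type: "string",
        description: "Name of a test plan in the scheme (e.g., 'UnitTests'). " +
            "Omit to run the scheme's default test plan."
    };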

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Executes tests') and the target ('for the active Xcode project'), providing a specific verb+resource combination. However, it doesn't differentiate from sibling tools like 'build_project' or 'trace_app' which might also involve project execution, leaving room for ambiguity about when to choose this specific tool.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention prerequisites (e.g., needing an active project via 'get_active_project' or 'set_project_path'), exclusions, or comparisons to siblings like 'run_lldb' or 'trace_app' for debugging-related testing. This lack of context makes it harder for an AI agent to select the right tool.
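
As an illustration, usage guidance an agent could act on might look like this (the sibling tool names are those mentioned above; the wording is hypothetical):

    // Illustrative only: description text with explicit usage guidance.
    const guidedDescription =
        "Run the active project's tests. Requires a project set via set_project_path. " +
        "Use build_project to check compilation only, or run_lldb to step through " +
        "a failing test.";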

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
