xcode_test

Run Xcode project tests, optionally filtering to specific tests or test classes via temporary test plan modifications. A destination must be specified to ensure a consistent test environment.

Instructions

Run tests for a specific project. Optionally run only specific tests or test classes by temporarily modifying the test plan (automatically restored after completion). ⏱️ Can take minutes to hours - do not timeout.
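
For a plain, unfiltered run, only the two required parameters are needed; everything else is optional. The path and simulator name below are illustrative placeholders, not values taken from this project:

```typescript
// Minimal xcode_test arguments: runs every test in the project's active scheme.
// Both values are placeholders - substitute your own project path and device.
const basicRunArguments = {
  xcodeproj: "/Users/username/MyApp/MyApp.xcodeproj",
  destination: "iPhone 15 Pro Simulator",
};
```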

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| xcodeproj | Yes | Absolute path to the .xcodeproj file (or .xcworkspace if available) - e.g., /path/to/project.xcodeproj | |
| destination | Yes | Test destination (required for predictable test environments) - e.g., "iPhone 15 Pro Simulator", "iPad Air Simulator" | |
| command_line_arguments | No | Additional command line arguments | |
| test_plan_path | No | Optional: Absolute path to .xctestplan file to temporarily modify for selective test execution | |
| selected_tests | No | Optional: Array of specific test identifiers to run. Format depends on test framework: XCTest: "TestAppUITests/testExample" (no parentheses), Swift Testing: "TestAppTests/example". Requires test_plan_path. | |
| selected_test_classes | No | Optional: Array of test class names to run (e.g., ["TestAppTests", "TestAppUITests"]). This runs ALL tests in the specified classes. Requires test_plan_path. | |
| test_target_identifier | No | Optional: Target identifier for the test target (required when using test filtering). Can be found in project.pbxproj. | |
| test_target_name | No | Optional: Target name for the test target (alternative to test_target_identifier). Example: "TestAppTests". | |
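
To run only a subset of tests, combine test_plan_path with selected_tests (or selected_test_classes) and identify the test target by name or identifier. A sketch of such an argument object, with hypothetical paths and test names:

```typescript
// Selective run: executes only two XCTest methods from the TestAppTests target.
// All paths, test identifiers, and target names are hypothetical examples.
const selectiveRunArguments = {
  xcodeproj: "/Users/username/MyApp/MyApp.xcodeproj",
  destination: "iPhone 15 Pro Simulator",
  test_plan_path: "/Users/username/MyApp/MyApp.xctestplan",
  // XCTest identifiers omit parentheses; Swift Testing identifiers use the Suite/function form.
  selected_tests: ["TestAppTests/testLoginFlow", "TestAppTests/testLogout"],
  test_target_name: "TestAppTests", // alternatively, supply test_target_identifier
};
```

The referenced .xctestplan is modified only for the duration of the run and restored afterwards, including when the run fails.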

Implementation Reference

  • Primary handler function implementing the core logic for executing Xcode tests. Handles project opening, destination setting, test plan modification for selective testing, JXA execution, build monitoring, XCResult detection and analysis, error handling, and result formatting.
    public static async test( projectPath: string, destination: string, commandLineArguments: string[] = [], openProject: OpenProjectCallback, options?: { testPlanPath?: string; selectedTests?: string[]; selectedTestClasses?: string[]; testTargetIdentifier?: string; testTargetName?: string; } ): Promise<McpResult> { const validationError = PathValidator.validateProjectPath(projectPath); if (validationError) return validationError; await openProject(projectPath); // Set the destination for testing { // Normalize the destination name for better matching const normalizedDestination = ParameterNormalizer.normalizeDestinationName(destination); const setDestinationScript = ` (function() { ${getWorkspaceByPathScript(projectPath)} const destinations = workspace.runDestinations(); const destinationNames = destinations.map(dest => dest.name()); // Try exact match first let targetDestination = destinations.find(dest => dest.name() === ${JSON.stringify(normalizedDestination)}); // If not found, try original name if (!targetDestination) { targetDestination = destinations.find(dest => dest.name() === ${JSON.stringify(destination)}); } if (!targetDestination) { throw new Error('Destination not found. Available: ' + JSON.stringify(destinationNames)); } workspace.activeRunDestination = targetDestination; return 'Destination set to ' + targetDestination.name(); })() `; try { await JXAExecutor.execute(setDestinationScript); } catch (error) { const errorMessage = error instanceof Error ? error.message : String(error); if (errorMessage.includes('Destination not found')) { // Extract available destinations from error message try { const availableMatch = errorMessage.match(/Available: (\[.*\])/); if (availableMatch) { const availableDestinations = JSON.parse(availableMatch[1]!); const bestMatch = ParameterNormalizer.findBestMatch(destination, availableDestinations); let message = `❌ Destination '${destination}' not found\n\nAvailable destinations:\n`; availableDestinations.forEach((dest: string) => { if (dest === bestMatch) { message += ` • ${dest} ← Did you mean this?\n`; } else { message += ` • ${dest}\n`; } }); return { content: [{ type: 'text', text: message }] }; } } catch { // Fall through to generic error } } return { content: [{ type: 'text', text: `Failed to set destination '${destination}': ${errorMessage}` }] }; } } // Handle test plan modification if selective tests are requested let originalTestPlan: string | null = null; let shouldRestoreTestPlan = false; if (options?.testPlanPath && (options?.selectedTests?.length || options?.selectedTestClasses?.length)) { if (!options.testTargetIdentifier && !options.testTargetName) { return { content: [{ type: 'text', text: 'Error: either test_target_identifier or test_target_name is required when using test filtering' }] }; } // If target name is provided but no identifier, look up the identifier let targetIdentifier = options.testTargetIdentifier; let targetName = options.testTargetName; if (options.testTargetName && !options.testTargetIdentifier) { try { const { ProjectTools } = await import('./ProjectTools.js'); const targetInfo = await ProjectTools.getTestTargets(projectPath); // Parse the target info to find the identifier const targetText = targetInfo.content?.[0]?.type === 'text' ? 
targetInfo.content[0].text : ''; const namePattern = new RegExp(`\\*\\*${options.testTargetName.replace(/[.*+?^${}()|[\]\\]/g, '\\$&')}\\*\\*\\s*\\n\\s*•\\s*Identifier:\\s*([A-F0-9]{24})`, 'i'); const match = targetText.match(namePattern); if (match && match[1]) { targetIdentifier = match[1]; targetName = options.testTargetName; } else { return { content: [{ type: 'text', text: `Error: Test target '${options.testTargetName}' not found. Use 'xcode_get_test_targets' to see available targets.` }] }; } } catch (lookupError) { return { content: [{ type: 'text', text: `Error: Failed to lookup test target '${options.testTargetName}': ${lookupError instanceof Error ? lookupError.message : String(lookupError)}` }] }; } } try { // Import filesystem operations const { promises: fs } = await import('fs'); // Backup original test plan originalTestPlan = await fs.readFile(options.testPlanPath, 'utf8'); shouldRestoreTestPlan = true; // Build selected tests array let selectedTests: string[] = []; // Add individual tests if (options.selectedTests?.length) { selectedTests.push(...options.selectedTests); } // Add all tests from selected test classes if (options.selectedTestClasses?.length) { // For now, add the class names - we'd need to scan for specific test methods later selectedTests.push(...options.selectedTestClasses); } // Get project name from path for container reference const { basename } = await import('path'); const projectName = basename(projectPath, '.xcodeproj'); // Create test target configuration const testTargets = [{ target: { containerPath: `container:${projectName}.xcodeproj`, identifier: targetIdentifier!, name: targetName || targetIdentifier! }, selectedTests: selectedTests }]; // Update test plan temporarily const { TestPlanTools } = await import('./TestPlanTools.js'); await TestPlanTools.updateTestPlanAndReload( options.testPlanPath, projectPath, testTargets ); Logger.info(`Temporarily modified test plan to run ${selectedTests.length} selected tests`); } catch (error) { const errorMsg = error instanceof Error ? error.message : String(error); return { content: [{ type: 'text', text: `Failed to modify test plan: ${errorMsg}` }] }; } } // Get initial xcresult files to detect new ones const initialXCResults = await this._findXCResultFiles(projectPath); const testStartTime = Date.now(); Logger.info(`Test start time: ${new Date(testStartTime).toISOString()}, found ${initialXCResults.length} initial XCResult files`); // Start the test action const hasArgs = commandLineArguments && commandLineArguments.length > 0; const script = ` (function() { ${getWorkspaceByPathScript(projectPath)} ${hasArgs ? 
`const actionResult = workspace.test({withCommandLineArguments: ${JSON.stringify(commandLineArguments)}});` : `const actionResult = workspace.test();` } // Return immediately - we'll monitor the build separately return JSON.stringify({ actionId: actionResult.id(), message: 'Test started' }); })() `; try { const startResult = await JXAExecutor.execute(script); const { actionId, message } = JSON.parse(startResult); Logger.info(`${message} with action ID: ${actionId}`); // Check for and handle "replace existing build" alert await this._handleReplaceExistingBuildAlert(); // Check for build errors with polling approach Logger.info('Monitoring for build logs...'); // Poll for build logs for up to 30 seconds let foundLogs = false; for (let i = 0; i < 6; i++) { await new Promise(resolve => setTimeout(resolve, 5000)); const logs = await BuildLogParser.getRecentBuildLogs(projectPath, testStartTime); if (logs.length > 0) { Logger.info(`Found ${logs.length} build logs after ${(i + 1) * 5} seconds`); foundLogs = true; break; } Logger.info(`No logs found after ${(i + 1) * 5} seconds, continuing to wait...`); } if (!foundLogs) { Logger.info('No build logs found after 30 seconds - build may not have started yet'); } Logger.info('Build monitoring complete, proceeding to analysis...'); // Get ALL recent build logs for analysis (test might create multiple logs) Logger.info(`DEBUG: testStartTime = ${testStartTime} (${new Date(testStartTime)})`); Logger.info(`DEBUG: projectPath = ${projectPath}`); // First check if we can find DerivedData const derivedData = await BuildLogParser.findProjectDerivedData(projectPath); Logger.info(`DEBUG: derivedData = ${derivedData}`); const recentLogs = await BuildLogParser.getRecentBuildLogs(projectPath, testStartTime); Logger.info(`DEBUG: recentLogs.length = ${recentLogs.length}`); if (recentLogs.length > 0) { Logger.info(`Analyzing ${recentLogs.length} recent build logs created during test...`); let totalErrors: string[] = []; let totalWarnings: string[] = []; let hasStoppedBuild = false; // Analyze each recent log to catch build errors in any of them for (const log of recentLogs) { try { Logger.info(`Analyzing build log: ${log.path}`); const results = await BuildLogParser.parseBuildLog(log.path); Logger.info(`Log analysis: ${results.errors.length} errors, ${results.warnings.length} warnings, status: ${results.buildStatus || 'unknown'}`); // Check for stopped builds if (results.buildStatus === 'stopped') { hasStoppedBuild = true; } // Accumulate errors and warnings from all logs totalErrors.push(...results.errors); totalWarnings.push(...results.warnings); } catch (error) { Logger.warn(`Failed to parse build log ${log.path}: ${error instanceof Error ? error.message : error}`); } } Logger.info(`Total build analysis: ${totalErrors.length} errors, ${totalWarnings.length} warnings, stopped builds: ${hasStoppedBuild}`); Logger.info(`DEBUG: totalErrors = ${JSON.stringify(totalErrors)}`); Logger.info(`DEBUG: totalErrors.length = ${totalErrors.length}`); Logger.info(`DEBUG: totalErrors.length > 0 = ${totalErrors.length > 0}`); Logger.info(`DEBUG: hasStoppedBuild = ${hasStoppedBuild}`); // Handle stopped builds first if (hasStoppedBuild && totalErrors.length === 0) { let message = `⏹️ TEST BUILD INTERRUPTED${hasArgs ? 
` (test with arguments ${JSON.stringify(commandLineArguments)})` : ''}\n\nThe build was stopped or interrupted before completion.\n\n💡 This may happen when:\n • The build was cancelled manually\n • Xcode was closed during the build\n • System resources were exhausted\n\nTry running the test again to complete it.`; return { content: [{ type: 'text', text: message }] }; } if (totalErrors.length > 0) { let message = `❌ TEST BUILD FAILED (${totalErrors.length} errors)\n\nERRORS:\n`; totalErrors.forEach(error => { message += ` • ${error}\n`; Logger.error('Test build error:', error); }); if (totalWarnings.length > 0) { message += `\n⚠️ WARNINGS (${totalWarnings.length}):\n`; totalWarnings.slice(0, 10).forEach(warning => { message += ` • ${warning}\n`; Logger.warn('Test build warning:', warning); }); if (totalWarnings.length > 10) { message += ` ... and ${totalWarnings.length - 10} more warnings\n`; } } Logger.error('ABOUT TO THROW McpError for test build failure'); throw new McpError(ErrorCode.InternalError, message); } else if (totalWarnings.length > 0) { Logger.warn(`Test build completed with ${totalWarnings.length} warnings`); totalWarnings.slice(0, 10).forEach(warning => { Logger.warn('Test build warning:', warning); }); if (totalWarnings.length > 10) { Logger.warn(`... and ${totalWarnings.length - 10} more warnings`); } } } else { Logger.info(`DEBUG: No recent build logs found since ${new Date(testStartTime)}`); } // Since build passed, now wait for test execution to complete Logger.info('Build succeeded, waiting for test execution to complete...'); // Monitor test completion with proper AppleScript checking and 6-hour safety timeout const maxTestTime = 21600000; // 6 hours safety timeout let testCompleted = false; let monitoringSeconds = 0; Logger.info('Monitoring test completion with 6-hour safety timeout...'); while (!testCompleted && (Date.now() - testStartTime) < maxTestTime) { try { // Check test completion via AppleScript every 30 seconds const checkScript = ` (function() { ${getWorkspaceByPathScript(projectPath)} if (!workspace) return 'No workspace'; const actions = workspace.schemeActionResults(); for (let i = 0; i < actions.length; i++) { const action = actions[i]; if (action.id() === "${actionId}") { const status = action.status(); const completed = action.completed(); return status + ':' + completed; } } return 'Action not found'; })() `; const result = await JXAExecutor.execute(checkScript, 15000); const [status, completed] = result.split(':'); // Log progress every 2 minutes if (monitoringSeconds % 120 === 0) { Logger.info(`Test monitoring: ${Math.floor(monitoringSeconds/60)}min - Action ${actionId}: status=${status}, completed=${completed}`); } // Check if test is complete if (completed === 'true' && (status === 'succeeded' || status === 'failed' || status === 'cancelled' || status === 'error occurred')) { testCompleted = true; Logger.info(`Test completed after ${Math.floor(monitoringSeconds/60)} minutes: status=${status}`); break; } } catch (error) { Logger.warn(`Test monitoring error at ${Math.floor(monitoringSeconds/60)}min: ${error instanceof Error ? 
error.message : error}`); } // Wait 30 seconds before next check await new Promise(resolve => setTimeout(resolve, 30000)); monitoringSeconds += 30; } if (!testCompleted) { Logger.warn('Test monitoring reached 6-hour timeout - proceeding anyway'); } Logger.info('Test monitoring result: Test completion detected or timeout reached'); // Only AFTER test completion is confirmed, look for the xcresult file Logger.info('Test execution completed, now looking for XCResult file...'); let newXCResult = await this._findNewXCResultFile(projectPath, initialXCResults, testStartTime); // If no xcresult found yet, wait for it to appear (should be quick now that tests are done) if (!newXCResult) { Logger.info('No xcresult file found yet, waiting for it to appear...'); let attempts = 0; const maxWaitAttempts = 15; // 15 seconds to find the file after test completion while (attempts < maxWaitAttempts && !newXCResult) { await new Promise(resolve => setTimeout(resolve, 1000)); newXCResult = await this._findNewXCResultFile(projectPath, initialXCResults, testStartTime); attempts++; } // If still no XCResult found, the test likely didn't run at all if (!newXCResult) { Logger.warn('No XCResult file found - test may not have run or current scheme has no tests'); return { content: [{ type: 'text', text: `⚠️ TEST EXECUTION UNCLEAR\n\nNo XCResult file was created, which suggests:\n• The current scheme may not have test targets configured\n• Tests may have been skipped\n• There may be configuration issues\n\n💡 Try:\n• Use a scheme with test targets (look for schemes ending in '-Tests')\n• Check that the project has test targets configured\n• Run tests manually in Xcode first to verify setup\n\nAvailable schemes: Use 'xcode_get_schemes' to see all schemes` }] }; } } let testResult: { status: string, error: string | undefined } = { status: 'completed', error: undefined }; if (newXCResult) { Logger.info(`Found xcresult file: ${newXCResult}, waiting for it to be fully written...`); // Calculate how long the test took const testEndTime = Date.now(); const testDurationMs = testEndTime - testStartTime; const testDurationMinutes = Math.round(testDurationMs / 60000); // Wait 8% of test duration before even attempting to read XCResult // This gives Xcode plenty of time to finish writing everything const proportionalWaitMs = Math.round(testDurationMs * 0.08); const proportionalWaitSeconds = Math.round(proportionalWaitMs / 1000); Logger.info(`Test ran for ${testDurationMinutes} minutes`); Logger.info(`Applying 8% wait time: ${proportionalWaitSeconds} seconds before checking XCResult`); Logger.info(`This prevents premature reads that could contribute to file corruption`); await new Promise(resolve => setTimeout(resolve, proportionalWaitMs)); // Now use the robust waiting method with the test duration for context const isReady = await XCResultParser.waitForXCResultReadiness(newXCResult, testDurationMs); // Pass test duration for proportional timeouts if (isReady) { // File is ready, verify analysis works try { Logger.info('XCResult file is ready, performing final verification...'); const parser = new XCResultParser(newXCResult); const analysis = await parser.analyzeXCResult(); if (analysis && analysis.totalTests >= 0) { Logger.info(`XCResult parsing successful! Found ${analysis.totalTests} tests`); testResult = { status: 'completed', error: undefined }; } else { Logger.error('XCResult parsed but incomplete test data found'); testResult = { status: 'failed', error: `XCResult file exists but contains incomplete test data. 
This may indicate an Xcode bug.` }; } } catch (parseError) { Logger.error(`XCResult file appears to be corrupt: ${parseError instanceof Error ? parseError.message : parseError}`); testResult = { status: 'failed', error: `XCResult file is corrupt or unreadable. This is likely an Xcode bug. Parse error: ${parseError instanceof Error ? parseError.message : parseError}` }; } } else { Logger.error('XCResult file failed to become ready within 3 minutes'); testResult = { status: 'failed', error: `XCResult file failed to become readable within 3 minutes despite multiple verification attempts. This indicates an Xcode bug where the file remains corrupt or incomplete.` }; } } else { Logger.warn('No xcresult file found after test completion'); testResult = { status: 'completed', error: 'No XCResult file found' }; } if (newXCResult) { Logger.info(`Found xcresult: ${newXCResult}`); // Check if the xcresult file is corrupt if (testResult.status === 'failed' && testResult.error) { // XCResult file is corrupt let message = `❌ XCODE BUG DETECTED${hasArgs ? ` (test with arguments ${JSON.stringify(commandLineArguments)})` : ''}\n\n`; message += `XCResult Path: ${newXCResult}\n\n`; message += `⚠️ ${testResult.error}\n\n`; message += `This is a known Xcode issue where the .xcresult file becomes corrupt even though Xcode reports test completion.\n\n`; message += `💡 Troubleshooting steps:\n`; message += ` 1. Restart Xcode and retry\n`; message += ` 2. Delete DerivedData and retry\n\n`; message += `The corrupt XCResult file is at:\n${newXCResult}`; return { content: [{ type: 'text', text: message }] }; } // We already confirmed the xcresult is readable in our completion detection loop // No need to wait again - proceed directly to analysis if (testResult.status === 'completed') { try { // Use shared utility to format test results with individual test details const parser = new XCResultParser(newXCResult); const testSummary = await parser.formatTestResultsSummary(true, 5); let message = `🧪 TESTS COMPLETED${hasArgs ? 
` with arguments ${JSON.stringify(commandLineArguments)}` : ''}\n\n`; message += `XCResult Path: ${newXCResult}\n`; message += testSummary + `\n\n`; const analysis = await parser.analyzeXCResult(); if (analysis.failedTests > 0) { message += `💡 Inspect test results:\n`; message += ` • Browse results: xcresult-browse --xcresult-path <path>\n`; message += ` • Get console output: xcresult-browser-get-console --xcresult-path <path> --test-id <test-id>\n`; message += ` • Get screenshots: xcresult-get-screenshot --xcresult-path <path> --test-id <test-id> --timestamp <timestamp>\n`; message += ` • Get UI hierarchy: xcresult-get-ui-hierarchy --xcresult-path <path> --test-id <test-id> --timestamp <timestamp>\n`; message += ` • Get element details: xcresult-get-ui-element --hierarchy-json <hierarchy-json> --index <index>\n`; message += ` • List attachments: xcresult-list-attachments --xcresult-path <path> --test-id <test-id>\n`; message += ` • Export attachments: xcresult-export-attachment --xcresult-path <path> --test-id <test-id> --index <index>\n`; message += ` • Quick summary: xcresult-summary --xcresult-path <path>\n`; message += `\n💡 Tip: Use console output to find failure timestamps for screenshots and UI hierarchies`; } else { message += `✅ All tests passed!\n\n`; message += `💡 Explore test results:\n`; message += ` • Browse results: xcresult-browse --xcresult-path <path>\n`; message += ` • Get console output: xcresult-browser-get-console --xcresult-path <path> --test-id <test-id>\n`; message += ` • Get screenshots: xcresult-get-screenshot --xcresult-path <path> --test-id <test-id> --timestamp <timestamp>\n`; message += ` • Get UI hierarchy: xcresult-get-ui-hierarchy --xcresult-path <path> --test-id <test-id> --timestamp <timestamp>\n`; message += ` • Get element details: xcresult-get-ui-element --hierarchy-json <hierarchy-json> --index <index>\n`; message += ` • List attachments: xcresult-list-attachments --xcresult-path <path> --test-id <test-id>\n`; message += ` • Export attachments: xcresult-export-attachment --xcresult-path <path> --test-id <test-id> --index <index>\n`; message += ` • Quick summary: xcresult-summary --xcresult-path <path>`; } return await this._restoreTestPlanAndReturn({ content: [{ type: 'text', text: message }] }, shouldRestoreTestPlan, originalTestPlan, options?.testPlanPath); } catch (parseError) { Logger.warn(`Failed to parse xcresult: ${parseError}`); // Fall back to basic result let message = `🧪 TESTS COMPLETED${hasArgs ? 
` with arguments ${JSON.stringify(commandLineArguments)}` : ''}\n\n`; message += `XCResult Path: ${newXCResult}\n`; message += `Status: ${testResult.status}\n\n`; message += `Note: XCResult parsing failed, but test file is available for manual inspection.\n\n`; message += `💡 Inspect test results:\n`; message += ` • Browse results: xcresult_browse <path>\n`; message += ` • Get console output: xcresult_browser_get_console <path> <test-id>\n`; message += ` • Get screenshots: xcresult_get_screenshot <path> <test-id> <timestamp>\n`; message += ` • Get UI hierarchy: xcresult_get_ui_hierarchy <path> <test-id> <timestamp>\n`; message += ` • Get element details: xcresult_get_ui_element <hierarchy-json> <index>\n`; message += ` • List attachments: xcresult_list_attachments <path> <test-id>\n`; message += ` • Export attachments: xcresult_export_attachment <path> <test-id> <index>\n`; message += ` • Quick summary: xcresult_summary <path>`; return await this._restoreTestPlanAndReturn({ content: [{ type: 'text', text: message }] }, shouldRestoreTestPlan, originalTestPlan, options?.testPlanPath); } } else { // Test completion detection timed out let message = `🧪 TESTS ${testResult.status.toUpperCase()}${hasArgs ? ` with arguments ${JSON.stringify(commandLineArguments)}` : ''}\n\n`; message += `XCResult Path: ${newXCResult}\n`; message += `Status: ${testResult.status}\n\n`; message += `⚠️ Test completion detection timed out, but XCResult file is available.\n\n`; message += `💡 Inspect test results:\n`; message += ` • Browse results: xcresult_browse <path>\n`; message += ` • Get console output: xcresult_browser_get_console <path> <test-id>\n`; message += ` • Get screenshots: xcresult_get_screenshot <path> <test-id> <timestamp>\n`; message += ` • Get UI hierarchy: xcresult_get_ui_hierarchy <path> <test-id> <timestamp>\n`; message += ` • Get element details: xcresult_get_ui_element <hierarchy-json> <index>\n`; message += ` • List attachments: xcresult_list_attachments <path> <test-id>\n`; message += ` • Export attachments: xcresult_export_attachment <path> <test-id> <index>\n`; message += ` • Quick summary: xcresult_summary <path>`; return await this._restoreTestPlanAndReturn({ content: [{ type: 'text', text: message }] }, shouldRestoreTestPlan, originalTestPlan, options?.testPlanPath); } } else { // No xcresult found - fall back to basic result if (testResult.status === 'failed') { return await this._restoreTestPlanAndReturn({ content: [{ type: 'text', text: `❌ TEST FAILED\n\n${testResult.error || 'Test execution failed'}\n\nNote: No XCResult file found for detailed analysis.` }] }, shouldRestoreTestPlan, originalTestPlan, options?.testPlanPath); } const message = `🧪 TESTS COMPLETED${hasArgs ? 
` with arguments ${JSON.stringify(commandLineArguments)}` : ''}\n\nStatus: ${testResult.status}\n\nNote: No XCResult file found for detailed analysis.`; return await this._restoreTestPlanAndReturn({ content: [{ type: 'text', text: message }] }, shouldRestoreTestPlan, originalTestPlan, options?.testPlanPath); } } catch (error) { // Restore test plan even on error if (shouldRestoreTestPlan && originalTestPlan && options?.testPlanPath) { try { const { promises: fs } = await import('fs'); await fs.writeFile(options.testPlanPath, originalTestPlan, 'utf8'); Logger.info('Restored original test plan after error'); } catch (restoreError) { Logger.error(`Failed to restore test plan: ${restoreError}`); } } // Re-throw McpErrors to properly signal build failures if (error instanceof McpError) { throw error; } const enhancedError = ErrorHelper.parseCommonErrors(error as Error); if (enhancedError) { return { content: [{ type: 'text', text: enhancedError }] }; } const errorMessage = error instanceof Error ? error.message : String(error); return { content: [{ type: 'text', text: `Failed to run tests: ${errorMessage}` }] }; } }
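
The selective-testing support above hinges on backing up the .xctestplan before modifying it and restoring it on every exit path, including errors. A stripped-down sketch of that pattern, assuming Node's fs/promises and a caller-supplied runTests callback (hypothetical names; the real handler threads the restore through _restoreTestPlanAndReturn and its catch block):

```typescript
import { promises as fs } from "fs";

// Hypothetical helper illustrating the backup/modify/restore pattern used above.
async function withTemporaryTestPlan(
  testPlanPath: string,
  applySelection: (originalPlan: string) => string,
  runTests: () => Promise<void>,
): Promise<void> {
  // Back up the original plan before touching it.
  const originalPlan = await fs.readFile(testPlanPath, "utf8");
  try {
    // Write the narrowed plan, then run the tests against it.
    await fs.writeFile(testPlanPath, applySelection(originalPlan), "utf8");
    await runTests();
  } finally {
    // Restore the untouched plan whether the run succeeded or threw.
    await fs.writeFile(testPlanPath, originalPlan, "utf8");
  }
}
```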
  • Dispatch handler in the main MCP server that validates parameters and delegates to the BuildTools.test() implementation.
    case 'xcode_test': if (!args.xcodeproj) { throw new McpError( ErrorCode.InvalidParams, `Missing required parameter: xcodeproj\n\n💡 To fix this:\n• Specify the absolute path to your .xcodeproj or .xcworkspace file using the "xcodeproj" parameter\n• Example: /Users/username/MyApp/MyApp.xcodeproj\n• You can drag the project file from Finder to get the path` ); } if (!args.destination) { throw new McpError( ErrorCode.InvalidParams, `Missing required parameter: destination\n\n💡 To fix this:\n• Specify the test destination (e.g., "iPhone 15 Pro Simulator")\n• Use 'get-run-destinations' to see available destinations\n• Example: "iPad Air Simulator" or "iPhone 16 Pro"` ); } const testOptions: { testPlanPath?: string; selectedTests?: string[]; selectedTestClasses?: string[]; testTargetIdentifier?: string; testTargetName?: string; } = {}; if (args.test_plan_path) testOptions.testPlanPath = args.test_plan_path as string; if (args.selected_tests) testOptions.selectedTests = args.selected_tests as string[]; if (args.selected_test_classes) testOptions.selectedTestClasses = args.selected_test_classes as string[]; if (args.test_target_identifier) testOptions.testTargetIdentifier = args.test_target_identifier as string; if (args.test_target_name) testOptions.testTargetName = args.test_target_name as string; return await BuildTools.test( args.xcodeproj as string, args.destination as string, (args.command_line_arguments as string[]) || [], this.openProject.bind(this), Object.keys(testOptions).length > 0 ? testOptions : undefined );
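
From a client's perspective, the dispatch above is reached through a standard MCP tools/call request. A hedged sketch of the wire-level payload (the request id, paths, and launch arguments are placeholders):

```typescript
// JSON-RPC payload an MCP client would send to reach the 'xcode_test' case above.
const toolsCallRequest = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "xcode_test",
    arguments: {
      xcodeproj: "/Users/username/MyApp/MyApp.xcodeproj",
      destination: "iPhone 15 Pro Simulator",
      command_line_arguments: ["-AppleLanguages", "(en)"],
    },
  },
};
```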
  • Input schema definition for the xcode_test tool, including all parameters for standard and selective test execution.
    name: 'xcode_test', description: 'Run tests for a specific project. Optionally run only specific tests or test classes by temporarily modifying the test plan (automatically restored after completion). ⏱️ Can take minutes to hours - do not timeout.', inputSchema: { type: 'object', properties: { xcodeproj: { type: 'string', description: preferredXcodeproj ? `Absolute path to the .xcodeproj file (or .xcworkspace if available) - defaults to ${preferredXcodeproj}` : 'Absolute path to the .xcodeproj file (or .xcworkspace if available) - e.g., /path/to/project.xcodeproj', }, destination: { type: 'string', description: 'Test destination (required for predictable test environments) - e.g., "iPhone 15 Pro Simulator", "iPad Air Simulator"', }, command_line_arguments: { type: 'array', items: { type: 'string' }, description: 'Additional command line arguments', }, test_plan_path: { type: 'string', description: 'Optional: Absolute path to .xctestplan file to temporarily modify for selective test execution', }, selected_tests: { type: 'array', items: { type: 'string' }, description: 'Optional: Array of specific test identifiers to run. Format depends on test framework: XCTest: "TestAppUITests/testExample" (no parentheses), Swift Testing: "TestAppTests/example". Requires test_plan_path.', }, selected_test_classes: { type: 'array', items: { type: 'string' }, description: 'Optional: Array of test class names to run (e.g., ["TestAppTests", "TestAppUITests"]). This runs ALL tests in the specified classes. Requires test_plan_path.', }, test_target_identifier: { type: 'string', description: 'Optional: Target identifier for the test target (required when using test filtering). Can be found in project.pbxproj.', }, test_target_name: { type: 'string', description: 'Optional: Target name for the test target (alternative to test_target_identifier). Example: "TestAppTests".', }, }, required: preferredXcodeproj ? ['destination'] : ['xcodeproj', 'destination'], },
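
One easy-to-miss detail in the schema definition: when the server is configured with a preferred project, xcodeproj drops out of the required list. The two possible shapes, shown for illustration:

```typescript
// required array without a configured preferredXcodeproj (the default case)
const requiredDefault = ["xcodeproj", "destination"];

// required array when the server was started with a preferredXcodeproj
const requiredWithPreferredProject = ["destination"];
```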
  • Tool registration for the list-tools request; it dynamically loads tool definitions from toolDefinitions, including the xcode_test schema.
    this.server.setRequestHandler(ListToolsRequestSchema, async () => { const toolOptions: { includeClean: boolean; preferredScheme?: string; preferredXcodeproj?: string; } = { includeClean: this.includeClean }; if (this.preferredScheme) toolOptions.preferredScheme = this.preferredScheme; if (this.preferredXcodeproj) toolOptions.preferredXcodeproj = this.preferredXcodeproj; const toolDefinitions = getToolDefinitions(toolOptions); return { tools: toolDefinitions.map(tool => ({ name: tool.name, description: tool.description, inputSchema: tool.inputSchema })), };
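
Because the handler maps each definition through unchanged, a tools/list response contains an entry like the following for this tool (the inputSchema body is abbreviated here; the full schema appears earlier in this section):

```typescript
// Approximate shape of the xcode_test entry in a tools/list response.
const xcodeTestListing = {
  name: "xcode_test",
  description:
    "Run tests for a specific project. Optionally run only specific tests or test classes " +
    "by temporarily modifying the test plan (automatically restored after completion). " +
    "⏱️ Can take minutes to hours - do not timeout.",
  inputSchema: {
    type: "object",
    properties: { /* see the Input Schema table above */ },
    required: ["xcodeproj", "destination"],
  },
};
```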
  • Secondary dispatch handler in the direct callToolDirect method (CLI compatibility). This path forwards only the base parameters and does not build the selective-testing options object.
    case 'xcode_test': if (!args.xcodeproj) { throw new McpError( ErrorCode.InvalidParams, `Missing required parameter: xcodeproj\n\n💡 To fix this:\n• Specify the absolute path to your .xcodeproj or .xcworkspace file using the "xcodeproj" parameter\n• Example: /Users/username/MyApp/MyApp.xcodeproj\n• You can drag the project file from Finder to get the path` ); } if (!args.destination) { throw new McpError( ErrorCode.InvalidParams, `Missing required parameter: destination\n\n💡 To fix this:\n• Specify the test destination (e.g., "iPhone 15 Pro Simulator")\n• Use 'get-run-destinations' to see available destinations\n• Example: "iPad Air Simulator" or "iPhone 16 Pro"` ); } return await BuildTools.test( args.xcodeproj as string, args.destination as string, (args.command_line_arguments as string[]) || [], this.openProject.bind(this) );
