MCP Console Automation Server

PARALLEL_AGENT_ORCHESTRATION_PLAN.md
# Parallel Agent Orchestration Plan

## Test Automation Framework Implementation

**Branch**: `feature/test-automation-framework`
**Orchestrator**: Claude (this session)
**Execution Model**: True Parallel with Expert AI Agents

---

## 🎯 Objective

Implement 5 phases of test automation features to achieve parity with Playwright MCP, making Console Automation MCP the definitive non-web testing framework.

---

## 📋 Phase Overview

| Phase | Feature | Agent Type | Dependencies | Est. Duration |
|-------|---------|------------|--------------|---------------|
| **Phase 1** | Test Recorder/Replay | Test Automation Expert | None | 2-3 days |
| **Phase 2** | Assertion Framework | QA/Testing Expert | None | 1-2 days |
| **Phase 3** | Test Suite Management | Test Infrastructure Expert | Phase 1, 2 | 2-3 days |
| **Phase 4** | Enhanced Test Execution | Performance Expert | Phase 1, 2 | 1-2 days |
| **Phase 5** | Test Data Management | Data Engineering Expert | Phase 1, 2, 3 | 1-2 days |

**Critical Path**: Phase 1 & 2 must complete before Phase 3
**Parallel Execution**: Phase 1 and 2 can run simultaneously
**Parallel Execution**: Phase 4 and 5 can start after Phase 1 & 2 complete

---

## 🏗️ Architecture Design

### Parallel Execution Strategy

```
START
├─ Phase 1 (Test Recorder) ─────┐
├─ Phase 2 (Assertions) ────────┤
└─ Wait for Phase 1 & 2 ────────┘
                │
                ▼
┌─ Phase 3 (Test Suites)
├─ Phase 4 (Execution) ──┐
├─ Phase 5 (Data Mgmt) ──┤
└─ Integration & Verify ─┘
```

---

## 📦 Phase 1: Test Recorder/Replay

### Agent Assignment: **Test Automation Expert**

### Deliverables

#### 1.1 Test Recorder Core

**Files to Create**:
- `src/testing/TestRecorder.ts` - Main recorder class
- `src/testing/TestRecorderManager.ts` - Recording session manager
- `src/types/test-recording.ts` - TypeScript types

**Functionality**:
```typescript
interface TestRecording {
  name: string;
  version: string;
  createdAt: string;
  duration: number;
  steps: RecordingStep[];
  metadata: RecordingMetadata;
}

interface RecordingStep {
  type: 'create_session' | 'send_input' | 'send_key' | 'wait_for_output' | 'assert';
  timestamp: number;
  data: any;
  output?: string;
}
```

#### 1.2 Test Replay Engine

**Files to Create**:
- `src/testing/TestReplayEngine.ts` - Replay execution engine
- `src/testing/TestPlayer.ts` - Step-by-step player

**Functionality**:
- Load recorded tests (JSON/YAML)
- Execute steps in sequence with timing
- Capture replay results
- Support replay speed control (1x, 2x, fast-forward)

#### 1.3 MCP Tools

**Add to `src/mcp/server.ts`**:
```typescript
case 'console_start_recording':    // Start recording console interactions
case 'console_stop_recording':     // Stop and save recording
case 'console_list_recordings':    // List saved test recordings
case 'console_replay_test':        // Replay a saved test
case 'console_generate_test_code': // Generate code (JS/TS/Python) from recording
```

#### 1.4 Test Code Generator

**Files to Create**:
- `src/testing/CodeGenerator.ts` - Code generation from recordings
- `src/testing/templates/` - Code templates for JS/TS/Python

**Output Formats**:
```javascript
// Generated JavaScript test
const { ConsoleAutomation } = require('@mcp/console-automation');

async function deployTest() {
  const automation = new ConsoleAutomation();
  const session = await automation.createSession({ command: 'ssh server' });
  await session.sendInput('cd /app && ./deploy.sh');
  await session.waitForOutput({ pattern: 'Deployment complete' });
  console.log('Test passed!');
}
```

### Verification Requirements

- [ ] Can start/stop recording
- [ ] Recording captures all console interactions with timestamps
- [ ] Recordings saved to `data/recordings/*.json`
- [ ] Can replay recorded test successfully
- [ ] Generated code executes correctly
- [ ] All 5 MCP tools work via Claude Code
- [ ] Unit tests for TestRecorder (90% coverage)
- [ ] Integration test: Record → Replay → Verify

### Success Criteria

```bash
# Agent must prove:
npm run test:phase1
# All tests pass (100%)

# Manual verification via Claude Code:
mcp__console-automation__console_start_recording
mcp__console-automation__console_stop_recording
mcp__console-automation__console_replay_test
# All return success
```

---

## 📦 Phase 2: Assertion Framework

### Agent Assignment: **QA/Testing Expert**

### Deliverables

#### 2.1 Assertion Engine

**Files to Create**:
- `src/testing/AssertionEngine.ts` - Core assertion logic
- `src/testing/Matchers.ts` - Matcher library
- `src/types/assertions.ts` - TypeScript types

**Assertion Types**:
```typescript
interface Assertion {
  type: 'output_contains' | 'output_matches' | 'exit_code' | 'no_errors' | 'state_equals';
  expected: any;
  actual?: any;
  result?: 'pass' | 'fail';
  message?: string;
}

// Matchers (signatures only at this stage)
export declare const matchers: {
  toContain(actual: string, expected: string): boolean;
  toMatch(actual: string, pattern: RegExp): boolean;
  toEqual(actual: any, expected: any): boolean;
  toBeGreaterThan(actual: number, expected: number): boolean;
  // ... more matchers
};
```

#### 2.2 Snapshot System

**Files to Create**:
- `src/testing/SnapshotManager.ts` - State snapshot capture
- `src/testing/SnapshotDiffer.ts` - Diff comparison engine

**Functionality**:
```typescript
interface SessionSnapshot {
  sessionId: string;
  timestamp: number;
  output: string;
  state: SessionState;
  metadata: any;
}

// Diff output:
{
  added: string[];
  removed: string[];
  changed: string[];
  identical: boolean;
}
```

#### 2.3 MCP Tools

**Add to `src/mcp/server.ts`**:
```typescript
case 'console_assert_output':      // Assert output contains/matches pattern
case 'console_assert_exit_code':   // Assert command exit code
case 'console_assert_no_errors':   // Assert no errors in output
case 'console_save_snapshot':      // Save session state snapshot
case 'console_compare_snapshots':  // Compare two snapshots
case 'console_assert_state':       // Assert session state matches expected
```

### Verification Requirements

- [ ] All 6 assertion tools work
- [ ] Assertions throw errors on failure (with clear messages)
- [ ] Snapshots saved to `data/snapshots/*.json`
- [ ] Snapshot diff shows added/removed/changed lines
- [ ] Matchers library covers 15+ common patterns
- [ ] Unit tests for all matchers (100% coverage)
- [ ] Integration test: Create session → Snapshot → Assert → Pass/Fail

### Success Criteria

```bash
npm run test:phase2
# All tests pass (100%)

# Manual verification:
mcp__console-automation__console_assert_output   # should pass
mcp__console-automation__console_assert_output   # should fail - throws error
mcp__console-automation__console_compare_snapshots
# Verify error messages are clear and helpful
```

---

## 📦 Phase 3: Test Suite Management

### Agent Assignment: **Test Infrastructure Expert**

### Dependencies: Phase 1 & 2 must be complete

### Deliverables

#### 3.1 Test Suite System

**Files to Create**:
- `src/testing/TestSuiteManager.ts` - Suite orchestration
- `src/testing/TestSuite.ts` - Suite definition
- `src/testing/TestRunner.ts` - Suite execution engine

**Suite Definition**:
```typescript
interface TestSuite {
  name: string;
  description: string;
  tests: TestDefinition[];
  setup?: TestHook;
  teardown?: TestHook;
  config: SuiteConfig;
}

interface TestDefinition {
  name: string;
  recording?: string;
  assertions: Assertion[];
  timeout: number;
  retry: number;
}
```

#### 3.2 Test Reporter

**Files to Create**:
- `src/testing/TestReporter.ts` - Result reporting
- `src/testing/reporters/HTMLReporter.ts` - HTML report generator
- `src/testing/reporters/JSONReporter.ts` - JSON report generator
- `src/testing/reporters/JUnitReporter.ts` - JUnit XML (CI/CD)

**Report Format**:
```typescript
interface TestSuiteResult {
  suiteName: string;
  totalTests: number;
  passed: number;
  failed: number;
  skipped: number;
  duration: number;
  tests: TestResult[];
}

interface TestResult {
  name: string;
  status: 'pass' | 'fail' | 'skip';
  duration: number;
  error?: Error;
  assertions: AssertionResult[];
}
```

#### 3.3 MCP Tools

**Add to `src/mcp/server.ts`**:
```typescript
case 'console_create_test_suite':    // Create new test suite
case 'console_add_test_to_suite':    // Add test to suite
case 'console_run_test_suite':       // Execute test suite
case 'console_get_test_results':     // Get suite results
case 'console_generate_test_report': // Generate HTML/JSON/JUnit report
case 'console_list_test_suites':     // List all suites
```

### Verification Requirements

- [ ] Can create test suite with multiple tests
- [ ] Suite runs all tests in order
- [ ] Supports setup/teardown hooks
- [ ] Generates HTML report with pass/fail visualization
- [ ] Generates JUnit XML for CI/CD
- [ ] Failed tests show clear error messages with stack traces
- [ ] Unit tests for TestSuiteManager (90% coverage)
- [ ] Integration test: Create suite → Add 5 tests → Run → Generate report

### Success Criteria

```bash
npm run test:phase3
# All tests pass (100%)

# Manual verification:
mcp__console-automation__console_create_test_suite
mcp__console-automation__console_run_test_suite
mcp__console-automation__console_generate_test_report
# Verify HTML report opens in browser with green/red indicators
```

---

## 📦 Phase 4: Enhanced Test Execution

### Agent Assignment: **Performance Expert**

### Dependencies: Phase 1 & 2 must be complete

### Deliverables

#### 4.1 Parallel Execution Engine

**Files to Create**:
- `src/testing/ParallelExecutor.ts` - Parallel test runner
- `src/testing/WorkerPool.ts` - Worker thread pool

**Functionality**:
```typescript
interface ParallelExecutionConfig {
  maxWorkers: number;
  timeout: number;
  isolateTests: boolean;
  failFast: boolean;
}

// Execute tests in parallel using worker threads
async runParallel(tests: TestDefinition[], config: ParallelExecutionConfig): Promise<TestResult[]>
```

#### 4.2 Retry & Flake Detection

**Files to Create**:
- `src/testing/RetryManager.ts` - Retry logic with backoff
- `src/testing/FlakeDetector.ts` - Flaky test detection

**Functionality**:
```typescript
interface RetryConfig {
  maxRetries: number;
  backoffMs: number;
  retryOnErrors: string[];
}

interface FlakeReport {
  testName: string;
  totalRuns: number;
  passes: number;
  failures: number;
  flakeRate: number; // 0-1
  isFlaky: boolean;  // true if > 10% flake rate
}
```

#### 4.3 MCP Tools

**Add to `src/mcp/server.ts`**:
```typescript
case 'console_run_test_parallel':     // Run multiple tests in parallel
case 'console_retry_failed_tests':    // Auto-retry failed tests
case 'console_detect_flaky_tests':    // Run test 10x, detect flakes
case 'console_set_test_timeout':      // Configure global timeout
case 'console_get_execution_metrics': // Get perf metrics (duration, memory, etc.)
```

### Verification Requirements

- [ ] Can run 10 tests in parallel (< 5 workers)
- [ ] Parallel execution is 3-5x faster than sequential
- [ ] Retry logic works (max 3 retries with exponential backoff)
- [ ] Flake detector identifies unstable tests (> 10% failure rate)
- [ ] Execution metrics track CPU/memory/duration
- [ ] Unit tests for ParallelExecutor (85% coverage)
- [ ] Performance test: 50 tests in parallel < 30 seconds

### Success Criteria

```bash
npm run test:phase4
npm run test:phase4:performance
# All tests pass, parallel execution 3x faster

# Manual verification:
mcp__console-automation__console_run_test_parallel
mcp__console-automation__console_detect_flaky_tests
# Verify parallel execution works and flake detection accurate
```

---

## 📦 Phase 5: Test Data Management

### Agent Assignment: **Data Engineering Expert**

### Dependencies: Phase 1, 2, 3 must be complete

### Deliverables

#### 5.1 Test Data Loader

**Files to Create**:
- `src/testing/TestDataLoader.ts` - Load fixtures from files
- `src/testing/DataParameterizer.ts` - Parameterized test runner

**Functionality**:
```typescript
interface TestFixture {
  name: string;
  data: any;
  format: 'json' | 'yaml' | 'csv';
}

// Load from data/fixtures/*.json
async loadFixture(name: string): Promise<any>

// Run test with multiple data sets
async runParameterized(test: TestDefinition, datasets: any[]): Promise<TestResult[]>
```

#### 5.2 Environment Setup/Teardown

**Files to Create**:
- `src/testing/TestEnvironment.ts` - Env management
- `src/testing/FixtureManager.ts` - Setup/teardown orchestration

**Functionality**:
```typescript
interface TestEnvironment {
  setup(): Promise<void>;         // Before test
  teardown(): Promise<void>;      // After test
  seed(data: any): Promise<void>; // Load fixtures
}

// Example usage:
beforeEach: async () => {
  await env.setup();
  await env.seed({
    users: [...],
    servers: [...]
  });
}

afterEach: async () => {
  await env.teardown();
  await env.cleanup();
}
```

#### 5.3 MCP Tools

**Add to `src/mcp/server.ts`**:
```typescript
case 'console_load_test_data':    // Load fixture from data/fixtures/
case 'console_parameterize_test': // Run test with multiple datasets
case 'console_setup_test_env':    // Run setup before test
case 'console_teardown_test_env': // Run cleanup after test
case 'console_seed_test_data':    // Seed environment with data
```

### Verification Requirements

- [ ] Can load JSON/YAML/CSV fixtures
- [ ] Parameterized tests run once per dataset
- [ ] Setup/teardown hooks execute in correct order
- [ ] Test data seeding works (create users, files, etc.)
- [ ] Cleanup removes all test data
- [ ] Unit tests for TestDataLoader (90% coverage)
- [ ] Integration test: Load fixture → Seed → Run test → Teardown

### Success Criteria

```bash
npm run test:phase5
# All tests pass (100%)

# Manual verification:
mcp__console-automation__console_load_test_data
mcp__console-automation__console_parameterize_test
# Verify test runs 5x with different data
```

---

## 🔐 Agent Communication Protocol

### 1. Agent Initialization

Each agent receives:
```json
{
  "phase": "Phase 1",
  "role": "Test Automation Expert",
  "branch": "feature/test-automation-framework",
  "baseCommit": "<commit-hash>",
  "deliverables": [...],
  "verificationRequirements": [...],
  "successCriteria": "..."
}
```

### 2. Progress Updates

Agents report every 30 minutes:
```json
{
  "phase": "Phase 1",
  "status": "in_progress",
  "completion": "45%",
  "filesCreated": [...],
  "testsWritten": 12,
  "currentTask": "Implementing TestReplayEngine",
  "blockers": []
}
```

### 3. Verification Protocol

Before declaring completion:

1. **Self-Test**: Agent runs `npm run test:phase<N>`
2. **Integration Test**: Agent creates demo test and runs it
3. **Documentation**: Agent updates `docs/TEST_AUTOMATION.md`
4. **Proof of Work**: Agent provides:
   - Test output (all green)
   - Demo recording + replay
   - Generated report (HTML/JSON)
   - Code coverage report

### 4. Orchestrator Review

Claude (me) validates:

- [ ] All files created and properly structured
- [ ] TypeScript compiles (`npm run typecheck`)
- [ ] Lint passes (`npm run lint`)
- [ ] All tests pass (`npm run test:phase<N>`)
- [ ] Manual MCP tool verification via Claude Code
- [ ] Code quality meets standards (no TODO/FIXME)
- [ ] Documentation complete and accurate

### 5. Sign-Off Process

Agent submits:
```markdown
# Phase N Completion Report

## Summary
- Files Created: 12
- Tests Written: 45 (100% coverage)
- MCP Tools: 6
- Documentation: Complete

## Verification Results
✅ Unit Tests: 45/45 passed
✅ Integration Tests: 12/12 passed
✅ TypeScript: No errors
✅ Lint: No warnings
✅ Manual Verification: All tools work in Claude Code

## Artifacts
- Test Report: data/reports/phase1-report.html
- Coverage: 94%
- Demo Recording: data/recordings/demo.json
- Demo Replay: ✅ Passed

## Blockers
None

## Ready for Integration: YES
```

---

## 📊 Orchestrator Responsibilities

### My Role as Orchestrator:

1. **Launch Agents**: Create 5 parallel agent sessions
2. **Monitor Progress**: Track each phase completion %
3. **Validate Work**: Review code, run tests, verify quality
4. **Resolve Conflicts**: Handle merge conflicts between phases
5. **Integration**: Merge all phases into feature branch
6. **Final Verification**: Run full test suite across all phases
7. **PR Creation**: Create PR to master with comprehensive summary

### Quality Gates:

- No agent proceeds past 50% without validation
- No agent declares "done" without passing all success criteria
- All agents must prove their work via automated tests
- Manual verification required for all MCP tools

---

## 🚀 Execution Timeline

### Week 1: Foundation (Phase 1 & 2 Parallel)
- **Day 1-2**: Agent 1 (Test Recorder) + Agent 2 (Assertions)
- **Day 3**: Orchestrator validation + merge

### Week 2: Infrastructure (Phase 3)
- **Day 4-6**: Agent 3 (Test Suites) builds on Phase 1 & 2
- **Day 7**: Orchestrator validation + merge

### Week 3: Enhancement (Phase 4 & 5 Parallel)
- **Day 8-9**: Agent 4 (Execution) + Agent 5 (Data Mgmt)
- **Day 10**: Orchestrator validation + merge

### Week 4: Integration & Launch
- **Day 11**: Full integration testing
- **Day 12**: Documentation & examples
- **Day 13**: PR creation & review
- **Day 14**: Merge to master + release

---

## 📝 Success Metrics

### Quantitative:
- [ ] 50+ new MCP tools added
- [ ] 200+ unit tests (90% coverage)
- [ ] 50+ integration tests
- [ ] 10+ example test suites
- [ ] Full Playwright MCP parity achieved

### Qualitative:
- [ ] Users can record/replay tests via Claude Code
- [ ] Test reports are professional and actionable
- [ ] Parallel execution significantly faster (3-5x)
- [ ] Flake detection prevents false failures
- [ ] Data-driven testing is simple and powerful

---

## 🎯 Final Deliverable

A production-ready test automation framework that makes Console Automation MCP the definitive testing tool for non-web protocols (SSH, Docker, Kubernetes, databases, etc.) - **equivalent to Playwright MCP for browsers**.

---

**Status**: Ready to Launch Agents
**Next Step**: Orchestrator creates 5 agent sessions with detailed specifications
**Branch**: `feature/test-automation-framework`
**Target Completion**: 2-3 weeks
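---

As a concrete reference for Phase 1, the replay loop reduces to "execute steps in order, honoring recorded inter-step delays". The sketch below is a minimal illustration, not the planned `TestReplayEngine` API: the `execute` callback and `speed` parameter are assumptions, and `metadata` is omitted.

```typescript
// Minimal sketch of the Phase 1 recording/replay model.
// RecordingStep mirrors the interface in the plan; the executor is injected.

interface RecordingStep {
  type: 'create_session' | 'send_input' | 'send_key' | 'wait_for_output' | 'assert';
  timestamp: number; // ms offset from recording start
  data: any;
  output?: string;
}

interface TestRecording {
  name: string;
  version: string;
  createdAt: string;
  duration: number;
  steps: RecordingStep[];
}

// Replay steps in order, scaling recorded delays by `speed`
// (speed = 2 plays back twice as fast; Infinity = fast-forward).
async function replay(
  recording: TestRecording,
  execute: (step: RecordingStep) => Promise<void>,
  speed = 1,
): Promise<void> {
  let prev = recording.steps[0]?.timestamp ?? 0;
  for (const step of recording.steps) {
    const delay = (step.timestamp - prev) / speed;
    if (delay > 0 && Number.isFinite(delay)) {
      await new Promise((r) => setTimeout(r, delay));
    }
    await execute(step);
    prev = step.timestamp;
  }
}
```

Passing `Infinity` as `speed` gives the fast-forward mode listed under 1.2.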
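---

The Phase 2 matcher library could start roughly as below. This is a sketch, not the planned `Matchers.ts`: `assertOutput` is a hypothetical helper showing only the throw-on-failure behavior the verification checklist requires, and `toEqual` uses a deliberately naive JSON comparison.

```typescript
// Minimal sketch of boolean matchers as declared in the plan.
const matchers = {
  toContain: (actual: string, expected: string): boolean => actual.includes(expected),
  toMatch: (actual: string, pattern: RegExp): boolean => pattern.test(actual),
  // Naive structural equality via JSON; a real matcher would deep-compare.
  toEqual: (actual: unknown, expected: unknown): boolean =>
    JSON.stringify(actual) === JSON.stringify(expected),
  toBeGreaterThan: (actual: number, expected: number): boolean => actual > expected,
};

// Assertions must throw clear errors on failure (per the Phase 2 checklist).
// `assertOutput` is an illustrative name, not the planned API.
function assertOutput(actual: string, expected: string): void {
  if (!matchers.toContain(actual, expected)) {
    throw new Error(
      `Assertion failed: expected output to contain ${JSON.stringify(expected)}, ` +
      `got ${JSON.stringify(actual.slice(0, 80))}`,
    );
  }
}
```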
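---

A naive version of the Phase 2 snapshot diff can be sketched with positional line comparison. The real `SnapshotDiffer` would likely use a proper LCS-based diff; `diffSnapshots` and the `old -> new` rendering of changed lines are illustrative assumptions. Only the `SnapshotDiff` shape comes from the plan.

```typescript
interface SnapshotDiff {
  added: string[];
  removed: string[];
  changed: string[];
  identical: boolean;
}

// Compare two captured outputs line by line. Lines at the same index that
// differ are "changed"; trailing surplus lines are "added"/"removed".
function diffSnapshots(before: string, after: string): SnapshotDiff {
  const a = before.split('\n');
  const b = after.split('\n');
  const common = Math.min(a.length, b.length);
  const changed: string[] = [];
  for (let i = 0; i < common; i++) {
    if (a[i] !== b[i]) changed.push(`${a[i]} -> ${b[i]}`);
  }
  const removed = a.slice(common); // lines only in the old snapshot
  const added = b.slice(common);   // lines only in the new snapshot
  return {
    added,
    removed,
    changed,
    identical: changed.length === 0 && added.length === 0 && removed.length === 0,
  };
}
```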
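---

The Phase 4 flake-rate computation can be sketched as below. The `FlakeReport` shape is from the plan; one assumption is made explicit: `flakeRate` here is the minority-outcome fraction, so a test that always fails is reported as broken rather than flaky.

```typescript
interface FlakeReport {
  testName: string;
  totalRuns: number;
  passes: number;
  failures: number;
  flakeRate: number; // fraction of runs on the minority outcome (0-0.5, an assumption)
  isFlaky: boolean;  // true above the 10% threshold from the plan
}

// Run the test `totalRuns` times (the plan's console_detect_flaky_tests
// uses 10) and flag it as flaky when the outcomes are mixed.
async function detectFlakes(
  testName: string,
  run: () => Promise<boolean>,
  totalRuns = 10,
  threshold = 0.1,
): Promise<FlakeReport> {
  let passes = 0;
  for (let i = 0; i < totalRuns; i++) {
    if (await run()) passes += 1;
  }
  const failures = totalRuns - passes;
  const flakeRate = Math.min(passes, failures) / totalRuns;
  return { testName, totalRuns, passes, failures, flakeRate, isFlaky: flakeRate > threshold };
}
```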
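---

The Phase 5 parameterized runner reduces to a loop over datasets with per-run failure isolation. This sketch uses a hypothetical `ParamResult` shape rather than the full `TestResult` from Phase 3, and the `body` callback stands in for replaying a recorded test.

```typescript
interface ParamResult<T> {
  dataset: T;
  status: 'pass' | 'fail';
  error?: Error;
}

// Run one test body once per dataset; a throwing body marks that run as
// failed without aborting the remaining datasets.
async function runParameterized<T>(
  body: (data: T) => Promise<void>,
  datasets: T[],
): Promise<ParamResult<T>[]> {
  const results: ParamResult<T>[] = [];
  for (const dataset of datasets) {
    try {
      await body(dataset);
      results.push({ dataset, status: 'pass' });
    } catch (err) {
      results.push({
        dataset,
        status: 'fail',
        error: err instanceof Error ? err : new Error(String(err)),
      });
    }
  }
  return results;
}
```

This matches the "run once per dataset" verification requirement: five datasets yield five results, pass or fail.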
