task-executor
Execute specific development tasks by providing detailed implementation guidance, requirements, acceptance criteria, and code patterns for systematic project completion.
Instructions
Executes a specific task from tasks.md by providing detailed implementation guidance, requirements, acceptance criteria, and code patterns. This tool focuses on implementing one task thoroughly.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| project_path | No | Path to the project directory | `.` (current directory) |
| task_id | Yes | Task ID to execute (e.g., T-1, T-2) | |
| update_status | No | Whether to include instructions for updating task status | `true` |
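
The tool is called like any other MCP tool: `task_id` is required, while `project_path` and `update_status` can be omitted and fall back to their defaults inside the handler. Below is a minimal client-side sketch assuming the standard MCP TypeScript SDK (`@modelcontextprotocol/sdk`) and a stdio transport; the server command, build path, and client name are hypothetical.

```typescript
// Minimal sketch: invoking the task-executor tool from an MCP client.
// Assumes @modelcontextprotocol/sdk; the command/args below are hypothetical.
import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

async function main() {
  const transport = new StdioClientTransport({
    command: "node",
    args: ["dist/server.js"], // hypothetical path to this MCP server's build output
  });
  const client = new Client({ name: "example-client", version: "0.1.0" });
  await client.connect(transport);

  // task_id is required; project_path defaults to '.' and update_status to true
  // inside the handler, so they can be omitted.
  const result = await client.callTool({
    name: "task-executor",
    arguments: { task_id: "T-1" },
  });

  // The handler returns a single text content block containing the execution guide prompt.
  console.log(result.content);

  await client.close();
}

main().catch(console.error);
```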
Implementation Reference
- src/server.ts:620-874 (registration): Registers the 'task-executor' MCP tool with `server.registerTool`, specifying the tool name, metadata, input schema, and handler function:

```typescript
server.registerTool(
  'task-executor',
  {
    title: 'Task Executor',
    description: 'Executes a specific task from tasks.md by providing detailed implementation guidance, requirements, acceptance criteria, and code patterns. This tool focuses on implementing one task thoroughly.',
    inputSchema: {
      task_id: z.string().describe("Task ID to execute (e.g., T-1, T-2)"),
      project_path: z.string().optional().describe("Path to the project directory (defaults to current directory)"),
      update_status: z.boolean().optional().describe("Whether to include instructions for updating task status (default: true)")
    }
  },
  async ({ task_id, project_path = '.', update_status = true }) => {
    const prompt = `# Task Execution Guide for ${task_id}

## SIMPLICITY PRINCIPLES
1. Keep outputs simple, clean, and straightforward.
2. Do not cut scope or functionality to be "simple".
3. Implement only what’s needed to satisfy acceptance criteria.
4. Prefer minimal steps and sections; avoid ceremony.
5. Reuse existing patterns; avoid new abstractions unless essential.
6. Avoid overengineering — choose the smallest design that works.
7. Be concise in wording, complete in coverage.
8. Iterate: ship minimal complete, then improve.

## ⚠️ CRITICAL: TASK COMPLETION RULES
**DO NOT MARK TASK AS DONE UNLESS:**
1. ALL acceptance criteria checkboxes are marked \`[x]\` (not \`[ ]\`)
2. You have verified each checkbox is actually checked in tasks.md
3. Run task-checker tool FIRST to confirm all criteria are met
4. If ANY checkbox remains \`[ ]\`, keep status as "🟡 In Progress"
5. Count and report: "Checked X of Y acceptance criteria"

## PHASE 1: TASK RETRIEVAL
1. Read the tasks file: ${project_path}/.spec/specs/tasks.md
2. Locate Task ${task_id} and extract:
   - Current status (should be ⚪ Not Started or 🟡 In Progress)
   - Requirements and traceability links
   - Acceptance criteria (EARS format)
   - Implementation details
   - Dependencies
   - Testing requirements

## PHASE 2: PRE-EXECUTION CHECKS

### Dependency Verification
Check that all tasks listed in "Blocked By" are marked as ✅ Done.
If any dependencies are not complete, STOP and report the blockage.

### Context Gathering
1. Read steering documents if they exist:
   - ${project_path}/.spec/steering/tech.md for technology standards
   - ${project_path}/.spec/steering/structure.md for file organization
2. Read the tasks document: ${project_path}/.spec/specs/tasks.md
   - Use the Requirements section (R-X) and task details as the source of truth
${update_status ? `
## PHASE 3: STATUS UPDATE - MARK AS IN PROGRESS
Update the task status from ⚪ to 🟡:
\`\`\`
Edit ${project_path}/.spec/specs/tasks.md
Change: **Status**: ⚪ Not Started
To: **Status**: 🟡 In Progress
\`\`\`
` : ''}
## PHASE 4: IMPLEMENTATION PLAN

### Files to Create
From the "Files to Create" section of the task:
1. For each new file listed:
   - Create the file at the specified path
   - Implement according to the code patterns provided
   - Follow the project's coding conventions

### Files to Modify
From the "Files to Modify" section:
1. For each existing file:
   - Read the current content first
   - Make the specified modifications
   - Preserve existing functionality
   - Follow existing code patterns

### Code Implementation
Using the "Code Patterns and Examples" section:
1. Implement the interfaces/types exactly as specified
2. Follow the function signatures provided
3. Add proper error handling
4. Include necessary imports
5. Maintain consistency with existing code

## PHASE 5: ACCEPTANCE CRITERIA IMPLEMENTATION
For each acceptance criterion in EARS format:
- WHEN [condition] THEN THE SYSTEM SHALL [behavior]

Ensure your implementation:
1. Handles the specified condition
2. Produces the expected behavior
3. Includes error cases mentioned
4. Meets performance requirements
5. Satisfies security requirements

### Implementation Checklist:
□ All required files created
□ All required files modified
□ Each acceptance criterion has corresponding code
□ Error handling implemented
□ Edge cases covered
□ Security considerations addressed
□ Performance requirements met

## PHASE 6: TESTING IMPLEMENTATION

### Unit Tests
From "Unit Tests" section:
1. Create test file at specified location
2. Implement tests for:
   - Happy path scenarios
   - Error handling
   - Edge cases
   - Validation logic
   - Mocked dependencies

### Integration Tests
If specified:
1. Test API endpoints
2. Test database interactions
3. Test authentication/authorization
4. Test component communication

### Manual Testing
Follow the "Manual Testing Checklist":
1. Verify each requirement is met
2. Test on different screen sizes (if UI)
3. Test keyboard navigation (if UI)
4. Test error scenarios
5. Verify loading states
6. Test with slow network
7. Check accessibility

## PHASE 7: CODE QUALITY CHECKS
Use the documented commands from .spec/steering/tech.md to ensure quality.
Specifically, look for the "Essential Commands" section (Type Check, Lint/Format, Build, Test).
- If commands are documented: run them and fix issues found.
- If not documented: skip running and add a TODO to update tech.md.
Fix any issues before proceeding.

## PHASE 8: IMPLEMENTATION NOTES

### Key Considerations:
- **Security**: Implement input validation, sanitization, authorization
- **Performance**: Add caching, optimize queries, implement loading states
- **Accessibility**: Include ARIA labels, keyboard navigation, screen reader support
- **Mobile**: Handle touch interactions, responsive breakpoints, gestures
- **Error Handling**: User-friendly messages, fallback UI, retry logic
- **Logging**: Add debug info, error tracking, user action tracking

### Best Practices:
1. Follow existing code patterns in the project
2. Use existing utilities and helper functions
3. Maintain consistent naming conventions
4. Follow the project's typing conventions or type-safety practices
5. Keep functions small and focused
6. Write self-documenting code
${update_status ? `
## PHASE 9: TASK COMPLETION
Once implementation is complete and tested:

**⚠️ CRITICAL VERIFICATION BEFORE MARKING DONE:**
1. First, verify ALL checkboxes are checked:
   - Count total acceptance criteria
   - Ensure EVERY single one shows \`- [x]\` (not \`- [ ]\`)
   - If ANY remain unchecked, DO NOT mark as Done
2. Update task status to Done ONLY if ALL checkboxes are \`[x]\`:
\`\`\`
Edit ${project_path}/.spec/specs/tasks.md
Change: **Status**: 🟡 In Progress
To: **Status**: ✅ Done
\`\`\`
3. Check all acceptance criteria checkboxes:
\`\`\`
Change: - [ ] [Criterion]
To: - [x] [Criterion]
\`\`\`

**RULE: Status can be "Done" ONLY when 100% of checkboxes show \`[x]\`**
` : ''}
## PHASE 10: POST-EXECUTION

### Verify Completion:
1. **ALL acceptance criteria checkboxes marked \`[x]\` (MANDATORY)**
2. All acceptance criteria met
3. All tests passing
4. No compilation errors
5. No linting errors (or only acceptable warnings)
6. Build succeeds
7. Manual testing complete

**⚠️ If any checkbox is \`[ ]\`, task is NOT complete regardless of other factors**

### Document Any Issues:
If you encountered any problems or made significant decisions:
1. Note them in the task's Implementation Notes section
2. Update documentation if needed
3. Create follow-up tasks if necessary

### Next Steps:
1. Run task-checker tool to confirm task is complete
2. Check for newly unblocked tasks
3. Run task-orchestrator to identify next tasks

## EXECUTION SUMMARY
**Task**: ${task_id}
**Objective**: [Extract from task description]
**Key Requirements**: [List from acceptance criteria]
**Implementation Approach**: [Your planned approach]
**Testing Strategy**: [How you'll verify correctness]
**Risk Factors**: [Any concerns or blockers]

Remember: Focus on completing this ONE task thoroughly before moving to the next. Quality over quantity.

## ⚠️ IMPORTANT: THIS IS A SPEC MCP TOOL INSTRUCTION ⚠️

**YOU ARE NOW READING INSTRUCTIONS FROM THE SPEC MCP \`task-executor\` TOOL**

This tool has provided you with implementation guidance for Task ${task_id}.

**What you MUST do now:**
1. **IMPLEMENT** the task following the phases above
2. **UPDATE** the task status in ${project_path}/.spec/specs/tasks.md file (NOT internal TODO list)
3. **CHECK** acceptance criteria boxes as you complete them
4. **VERIFY** with Spec MCP \`task-checker\` tool when done

**Clarification:**
- Task status is tracked in the tasks.md FILE (not VSCode's internal TODO system)
- Update status by EDITING the tasks.md file directly
- After implementation, CALL the Spec MCP \`task-checker\` tool to verify

**DO NOT** just read these instructions - **ACTUALLY IMPLEMENT** the task now!`;

    return { content: [{ type: "text", text: prompt }] };
  }
);
```
- src/server.ts:631-873 (handler): The handler builds a phased prompt for the requested task (task retrieval, pre-execution checks, implementation plan, acceptance criteria, testing, quality checks, optional status updates, and completion verification) and returns it to the MCP client as a single text content block. The handler body is shown in full within the registration snippet above; a hypothetical tasks.md task entry of the kind this prompt operates on is sketched after this list.
- src/server.ts:625-629 (schema): Zod-based input schema defining the tool's parameters: required task_id, plus optional project_path and update_status.

```typescript
inputSchema: {
  task_id: z.string().describe("Task ID to execute (e.g., T-1, T-2)"),
  project_path: z.string().optional().describe("Path to the project directory (defaults to current directory)"),
  update_status: z.boolean().optional().describe("Whether to include instructions for updating task status (default: true)")
}
```
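For orientation, the following is a hypothetical excerpt of a task entry in `.spec/specs/tasks.md`, limited to the fields the generated prompt reads and edits (status line, requirement traceability, Blocked By dependencies, EARS acceptance criteria checkboxes, and file lists). The task, requirement IDs, and file paths are invented for illustration; the exact layout produced by the spec tooling may differ.

```markdown
<!-- Hypothetical tasks.md excerpt; the real layout may differ -->
### T-2: Add input validation to the signup form

**Status**: ⚪ Not Started
**Requirements**: R-3
**Blocked By**: T-1

#### Acceptance Criteria
- [ ] WHEN the email field is empty THEN THE SYSTEM SHALL show an inline validation error
- [ ] WHEN all fields are valid THEN THE SYSTEM SHALL enable the submit button

#### Files to Create
- src/validation/signupSchema.ts

#### Files to Modify
- src/components/SignupForm.tsx
```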