
generate-tasks

Create implementation plans and task breakdowns from feature requests by analyzing codebases. Generates structured documentation with requirements, numbered tasks, dependencies, and testing guidance for systematic development workflows.

Instructions

Generate tasks.md (implementation plan & task breakdown) from your request and codebase. Writes .spec/specs/tasks.md with overview, numbered requirements using EARS, implementable tasks (T-1…), dependencies/phases, and testing guidance. Use for “task breakdown”, “create tasks.md”, “implementation plan”, or “roadmap”.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| user_request | Yes | Feature request or requirement to plan for | — |
| project_path | No | Path to the project directory | `.` (current directory) |
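These parameters arrive as the `arguments` of a standard MCP `tools/call` request. A minimal sketch of such a request, following the JSON-RPC shape defined by the MCP specification (the `id` and both argument values here are illustrative, not from the source):

```typescript
// Illustrative MCP tools/call request for the generate-tasks tool.
// Field names follow the MCP JSON-RPC shape; the argument values are made up.
const request = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call",
  params: {
    name: "generate-tasks",
    arguments: {
      user_request: "Add CSV export to the reports page", // required
      project_path: "./my-app",                           // optional, defaults to "."
    },
  },
};

console.log(JSON.stringify(request.params.arguments));
```

In practice an MCP client library builds this envelope for you; only `name` and `arguments` are tool-specific.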

Implementation Reference

  • The handler function that implements the 'generate-tasks' tool. It generates a detailed markdown prompt instructing the AI agent to create a '.spec/specs/tasks.md' file. The prompt includes structure for overview, requirements (EARS format), implementation tasks (T-1 etc.), phases, dependencies (Mermaid graph), risk assessment, and execution guidelines. It emphasizes using steering documents (product.md, tech.md, structure.md) if available for project-specific details like tech stack, structure, and terminology.
async ({ user_request, project_path = '.' }) => {
  const prompt = `# ⚠️ IMPORTANT: YOU MUST EXECUTE THE FOLLOWING ACTIONS

This MCP tool provides instructions that YOU (the AI assistant) must execute using your available file system tools.
**DO NOT** just report success — you MUST actually create the files using Write or MultiEdit tools.

# Generate Plan and Tasks for ${project_path}

## SIMPLICITY PRINCIPLES
1. Keep outputs simple, clean, and straightforward.
2. Do not cut scope or functionality to be "simple".
3. Implement only what’s needed to satisfy acceptance criteria.
4. Prefer minimal steps and sections; avoid ceremony.
5. Reuse existing patterns; avoid new abstractions unless essential.
6. Avoid overengineering — choose the smallest design that works.
7. Be concise in wording, complete in coverage.
8. Iterate: ship minimal complete, then improve.

## What This Tool Does
- Reads steering docs if present and analyzes the codebase context
- Extracts requirements from the user request and evidence from code
- Produces ONE file: ${project_path}/.spec/specs/tasks.md containing:
  - Overview and Requirements (with EARS acceptance criteria and R-IDs)
  - Implementation tasks T-1... with traceability, code examples, file references
  - Phases, dependencies, testing requirements, and risk assessment

## Prerequisite: Steering Docs Usage
1. **USE Read tool** to check if these files exist and read them if present:
   - ${project_path}/.spec/steering/product.md
   - ${project_path}/.spec/steering/tech.md
   - ${project_path}/.spec/steering/structure.md
2. If found, TREAT THEM AS AUTHORITATIVE and APPLY them when generating tasks:
   - From tech.md: determine language/framework, correct file extensions, and "Essential Commands" (install, build, test, lint, type-check). Use these in tasks and testing sections.
   - From structure.md: use actual directories, file naming conventions, and test locations when listing "Files to Modify/Create" and when referencing paths.
   - From product.md: align terminology, user roles, and feature names in Requirements and task titles.
3. If any file is missing, proceed but mark unknowns as [NEEDED] and prefer evidence from the codebase.

## 🔴 CRITICAL: CREATE THE FILE — NOT JUST REPORT SUCCESS
1) Create directory: mkdir -p ${project_path}/.spec/specs/
2) Generate tasks.md using the template and guardrails below
3) Save to: ${project_path}/.spec/specs/tasks.md
4) Verify the file exists after creation (Read tool)

# tasks.md

## 0. Overview
- Purpose: Summarize the feature in 1–2 sentences
- Scope: In/out for this iteration
- Assumptions: Constraints that influence design

## 1. Requirements (with EARS)
Define numbered requirements and acceptance criteria directly here.
- Formatting: “As a [role], I want [goal] so that [benefit]”
- Use EARS for acceptance criteria: WHEN [condition] THEN THE SYSTEM SHALL [expected behavior]
- Evidence tags: mark details as [EXISTS], [EXAMPLE], or [NEEDED]
- Evidence: reference files or snippets when available; if not available, mark as [NEEDED].
- Invariants: constraints to preserve (e.g., existing behavior, public API contracts)
- Out-of-scope: list what will not change to prevent scope creep

### R-1: <Title from request/context>
- User Story: As a <role>, I want <change>, so that <benefit>.
- Files Affected: <List evidenced paths if available>
- Acceptance Criteria:
  - WHEN <condition with actual names> THEN THE SYSTEM SHALL <behavior>
  - WHEN <error/edge case> THEN THE SYSTEM SHALL <behavior>

### R-2: <Next requirement>
- User Story: ...
- Files Affected: ...
- Acceptance Criteria: ...

#### Edge Cases and Errors
- [Edge case] → Expected behavior

#### Non-Functional Requirements
- Performance, security, accessibility, observability

## 2. Implementation Tasks
Tasks derived from the requirements and code evidence with traceability, evidence, and tests.

## Task Structure Template

### Task T-1: [Task Title]
**Status**: ⚪ Not Started
**Evidence**: [EXISTS/EXAMPLE/NEEDED] — Cite sources
**Requirement Traceability**: Links to R-[X] from the Requirements section

#### Summary
- What this task accomplishes

#### Files to Modify
- a/b/file1.<ext> — Implement X (use appropriate extension per tech.md)

#### Files to Create
- src/new/Feature.<ext> — Per design Section 2.x (use appropriate extension per tech.md)

#### Code Patterns and Examples
\`\`\`
// Copy verbatim examples from Section 2.7 [EXAMPLE]
// Use the appropriate language fence (e.g., \`\`\`java, \`\`\`python) based on tech.md
\`\`\`

#### Acceptance Criteria (EARS)
- [ ] WHEN <condition> THEN THE SYSTEM SHALL <behavior>
- [ ] WHEN <error/edge case> THEN THE SYSTEM SHALL <behavior>

#### Testing
- Unit: location and naming per project conventions (see tech.md/structure.md); cover happy path, errors, edges
- Integration: API/DB or component interaction tests per project conventions
- E2E: user journey tests per project conventions (if applicable)

#### Notes
- Assumptions, follow-ups, clarifications

---

## Task Breakdown (Generated)
- List all tasks T-1, T-2, ... with structure above

## Phases and Dependencies
- Phase 1: Foundation (T-1, T-2, T-3)
- Phase 2: Core (T-4, T-5, T-6)
- Phase 3: Integration (T-7, T-8, T-9)
- Phase 4: Quality/Launch (T-10, T-11, T-12)

## Dependency Graph
\`\`\`mermaid
graph TD
  T1[T-1: Setup] --> T4[T-4: Core Feature]
  T2[T-2: Database] --> T4
  T2 --> T5[T-5: Business Logic]
  T3[T-3: API Structure] --> T5
  T4 --> T6[T-6: UI Components]
  T4 --> T7[T-7: Integration]
  T5 --> T7
  T6 --> T8[T-8: Error Handling]
  T7 --> T8
  T7 --> T9[T-9: Performance]
  T8 --> T10[T-10: Testing]
  T9 --> T10
  T10 --> T11[T-11: Documentation]
  T11 --> T12[T-12: Monitoring]
\`\`\`

## Risk Assessment
- High risk: <task> — mitigation
- Critical path: T-1 → T-2 → T-4 → T-7 → T-8 → T-10 → T-11

## Execution Guidelines
1) One task at a time; update status ⚪→🟡→✅
2) Verify all EARS criteria before Done
3) Tests pass; docs updated

## EXECUTION STEPS
1) **USE Read tool** to check and read: ${project_path}/.spec/steering/product.md, tech.md, structure.md. If present, APPLY them to select correct file extensions, paths, commands, and terminology. If missing, continue and mark gaps as [NEEDED]. Also read relevant project files (package.json, README.md, etc.).
2) Extract requirements and evidence from user request and codebase
3) Map [EXISTS]/[EXAMPLE]/[NEEDED] to implementation vs. research tasks
4) mkdir -p ${project_path}/.spec/specs/
5) Write tasks to ${project_path}/.spec/specs/tasks.md
6) Verify the file exists (Read tool)

## SUCCESS CRITERIA
✅ .spec/specs/tasks.md physically created and verified
✅ Requirements section with EARS acceptance criteria
✅ Clear traceability (T-X → R-Y)
✅ References to files/snippets when available (mark [NEEDED] when unavailable)
✅ Testing requirements and dependency graph included
`;
  return { content: [{ type: "text", text: prompt }] };
}
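Note that the handler never touches the file system itself: it only interpolates `project_path` into the prompt template and returns the result as MCP text content for the calling assistant to act on. A reduced sketch of that contract (the `buildPrompt` helper is hypothetical; the real handler inlines a much longer template):

```typescript
// Hypothetical, reduced version of the handler's prompt construction.
// The real handler interpolates project_path throughout the full template above.
function buildPrompt(projectPath: string): string {
  return [
    `# Generate Plan and Tasks for ${projectPath}`,
    `Produce ONE file: ${projectPath}/.spec/specs/tasks.md`,
  ].join("\n");
}

// MCP tool results wrap text in a content array, as the handler does.
const result = { content: [{ type: "text", text: buildPrompt(".") }] };

console.log(result.content[0].text.includes(".spec/specs/tasks.md")); // true
```

This design keeps the tool side-effect free: all writes happen in the client agent's context, where its own Write/MultiEdit tools and permissions apply.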
  • The input schema for the 'generate-tasks' tool, defining parameters: user_request (required string) and project_path (optional string, defaults to current dir). Uses Zod for validation.
    inputSchema: {
      user_request: z.string().describe("Feature request or requirement to plan for"),
      project_path: z.string().optional().describe("Path to the project directory (defaults to current directory)")
    }
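Because `project_path` is optional, the handler falls back to the current directory via a destructuring default. A minimal sketch of that behavior, without the Zod layer (the `resolveArgs` helper is hypothetical, added only to make the default visible):

```typescript
// Hypothetical helper mirroring the handler's `project_path = '.'` default.
interface GenerateTasksArgs {
  user_request: string;   // required
  project_path?: string;  // optional; defaults to "."
}

function resolveArgs({ user_request, project_path = "." }: GenerateTasksArgs) {
  return { user_request, project_path };
}

console.log(resolveArgs({ user_request: "add dark mode" }).project_path); // "."
```

An explicitly passed `project_path` (e.g. `"./my-app"`) is used as-is; the default only applies when the field is omitted or `undefined`.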
  • src/server.ts:233-417 (registration)
    The registration of the 'generate-tasks' tool on the MCP server, including title, description, inputSchema, and handler function.
    server.registerTool(
      'generate-tasks',
      {
        title: 'Spec MCP: Generate tasks.md (Plan & Task Breakdown)',
        description: 'Generate tasks.md (implementation plan & task breakdown) from your request and codebase. Writes `.spec/specs/tasks.md` with overview, numbered requirements using EARS, implementable tasks (T-1…), dependencies/phases, and testing guidance. Use for “task breakdown”, “create tasks.md”, “implementation plan”, or “roadmap”.',
        inputSchema: {
          user_request: z.string().describe("Feature request or requirement to plan for"),
          project_path: z.string().optional().describe("Path to the project directory (defaults to current directory)")
        }
      },
      async ({ user_request, project_path = '.' }) => {
        // Handler body identical to the implementation shown above.
      }
    );

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/karol-f/spec-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.