Glama

Spec MCP: Generate tasks.md (Plan & Task Breakdown)

generate-tasks

Create implementation plans and task breakdowns from feature requests by analyzing codebases. Generates structured documentation with requirements, numbered tasks, dependencies, and testing guidance for systematic development workflows.

Instructions

Generate tasks.md (implementation plan & task breakdown) from your request and codebase. Writes .spec/specs/tasks.md with overview, numbered requirements using EARS, implementable tasks (T-1…), dependencies/phases, and testing guidance. Use for “task breakdown”, “create tasks.md”, “implementation plan”, or “roadmap”.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| project_path | No | Path to the project directory | Current directory (`.`) |
| user_request | Yes | Feature request or requirement to plan for | — |
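The two parameters above map to a simple argument shape. As a rough sketch in plain TypeScript (not the server's actual validation code; the real tool validates with Zod, as shown in the implementation reference below), resolving the default might look like:

```typescript
// Hypothetical helper mirroring the tool's input schema; not part of Spec MCP.
type GenerateTasksArgs = {
  user_request: string;   // required: the feature request to plan for
  project_path?: string;  // optional: defaults to the current directory
};

function resolveArgs(input: Partial<GenerateTasksArgs>): Required<GenerateTasksArgs> {
  if (typeof input.user_request !== "string" || input.user_request.trim() === "") {
    throw new Error("user_request is required");
  }
  return {
    user_request: input.user_request,
    project_path: input.project_path ?? ".", // same default as the schema
  };
}
```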

Implementation Reference

  • The handler function that implements the 'generate-tasks' tool. It generates a detailed markdown prompt instructing the AI agent to create a '.spec/specs/tasks.md' file. The prompt includes structure for overview, requirements (EARS format), implementation tasks (T-1 etc.), phases, dependencies (Mermaid graph), risk assessment, and execution guidelines. It emphasizes using steering documents (product.md, tech.md, structure.md) if available for project-specific details like tech stack, structure, and terminology.
      async ({ user_request, project_path = '.' }) => {
        const prompt = `# ⚠️ IMPORTANT: YOU MUST EXECUTE THE FOLLOWING ACTIONS
    
    This MCP tool provides instructions that YOU (the AI assistant) must execute using your available file system tools.
    **DO NOT** just report success — you MUST actually create the files using Write or MultiEdit tools.
    
    # Generate Plan and Tasks for ${project_path}
    
    ## SIMPLICITY PRINCIPLES
    1. Keep outputs simple, clean, and straightforward.
    2. Do not cut scope or functionality to be "simple".
    3. Implement only what’s needed to satisfy acceptance criteria.
    4. Prefer minimal steps and sections; avoid ceremony.
    5. Reuse existing patterns; avoid new abstractions unless essential.
    6. Avoid overengineering — choose the smallest design that works.
    7. Be concise in wording, complete in coverage.
    8. Iterate: ship minimal complete, then improve.
    
    ## What This Tool Does
    - Reads steering docs if present and analyzes the codebase context
    - Extracts requirements from the user request and evidence from code
    - Produces ONE file: ${project_path}/.spec/specs/tasks.md containing:
      - Overview and Requirements (with EARS acceptance criteria and R-IDs)
      - Implementation tasks T-1... with traceability, code examples, file references
      - Phases, dependencies, testing requirements, and risk assessment
    
    ## Prerequisite: Steering Docs Usage
    1. **USE Read tool** to check if these files exist and read them if present:
       - ${project_path}/.spec/steering/product.md
       - ${project_path}/.spec/steering/tech.md
       - ${project_path}/.spec/steering/structure.md
    2. If found, TREAT THEM AS AUTHORITATIVE and APPLY them when generating tasks:
       - From tech.md: determine language/framework, correct file extensions, and "Essential Commands" (install, build, test, lint, type-check). Use these in tasks and testing sections.
       - From structure.md: use actual directories, file naming conventions, and test locations when listing "Files to Modify/Create" and when referencing paths.
       - From product.md: align terminology, user roles, and feature names in Requirements and task titles.
    3. If any file is missing, proceed but mark unknowns as [NEEDED] and prefer evidence from the codebase.
    
    
    ## 🔴 CRITICAL: CREATE THE FILE — NOT JUST REPORT SUCCESS
    1) Create directory: mkdir -p ${project_path}/.spec/specs/
    2) Generate tasks.md using the template and guardrails below
    3) Save to: ${project_path}/.spec/specs/tasks.md
    4) Verify the file exists after creation (Read tool)
    
    # tasks.md
    
    ## 0. Overview
    - Purpose: Summarize the feature in 1–2 sentences
    - Scope: In/out for this iteration
    - Assumptions: Constraints that influence design
    
    ## 1. Requirements (with EARS)
    Define numbered requirements and acceptance criteria directly here.
    - Formatting: “As a [role], I want [goal] so that [benefit]”
    - Use EARS for acceptance criteria: WHEN [condition] THEN THE SYSTEM SHALL [expected behavior]
    - Evidence tags: mark details as [EXISTS], [EXAMPLE], or [NEEDED]
      - Evidence: reference files or snippets when available; if not available, mark as [NEEDED].
    - Invariants: constraints to preserve (e.g., existing behavior, public API contracts)
    - Out-of-scope: list what will not change to prevent scope creep
    
    ### R-1: <Title from request/context>
    - User Story: As a <role>, I want <change>, so that <benefit>.
    - Files Affected: <List evidenced paths if available>
    - Acceptance Criteria:
      - WHEN <condition with actual names> THEN THE SYSTEM SHALL <behavior>
      - WHEN <error/edge case> THEN THE SYSTEM SHALL <behavior>
    
    ### R-2: <Next requirement>
    - User Story: ...
    - Files Affected: ...
    - Acceptance Criteria: ...
    
    #### Edge Cases and Errors
    - [Edge case] → Expected behavior
    
    #### Non-Functional Requirements
    - Performance, security, accessibility, observability
    
    ## 2. Implementation Tasks
    Tasks derived from the requirements and code evidence with traceability, evidence, and tests.
    
    ## Task Structure Template
    
    ### Task T-1: [Task Title]
    **Status**: ⚪ Not Started
    **Evidence**: [EXISTS/EXAMPLE/NEEDED] — Cite sources
    **Requirement Traceability**: Links to R-[X] from the Requirements section
    
    #### Summary
    - What this task accomplishes
    
    #### Files to Modify
    - a/b/file1.<ext> — Implement X (use appropriate extension per tech.md)
    
    #### Files to Create
    - src/new/Feature.<ext> — Per design Section 2.x (use appropriate extension per tech.md)
    
    #### Code Patterns and Examples
    \`\`\`
    // Copy verbatim examples from Section 2.7 [EXAMPLE]
    // Use the appropriate language fence (e.g., \`\`\`java, \`\`\`python) based on tech.md
    \`\`\`
    
    #### Acceptance Criteria (EARS)
    - [ ] WHEN <condition> THEN THE SYSTEM SHALL <behavior>
    - [ ] WHEN <error/edge case> THEN THE SYSTEM SHALL <behavior>
    
    #### Testing
    - Unit: location and naming per project conventions (see tech.md/structure.md); cover happy path, errors, edges
    - Integration: API/DB or component interaction tests per project conventions
    - E2E: user journey tests per project conventions (if applicable)
    
    #### Notes
    - Assumptions, follow-ups, clarifications
    
    ---
    
    ## Task Breakdown (Generated)
    - List all tasks T-1, T-2, ... with structure above
    
    ## Phases and Dependencies
    - Phase 1: Foundation (T-1, T-2, T-3)
    - Phase 2: Core (T-4, T-5, T-6)
    - Phase 3: Integration (T-7, T-8, T-9)
    - Phase 4: Quality/Launch (T-10, T-11, T-12)
    
    ## Dependency Graph
    \`\`\`mermaid
    graph TD
        T1[T-1: Setup] --> T4[T-4: Core Feature]
        T2[T-2: Database] --> T4
        T2 --> T5[T-5: Business Logic]
        T3[T-3: API Structure] --> T5
        T4 --> T6[T-6: UI Components]
        T4 --> T7[T-7: Integration]
        T5 --> T7
        T6 --> T8[T-8: Error Handling]
        T7 --> T8
        T7 --> T9[T-9: Performance]
        T8 --> T10[T-10: Testing]
        T9 --> T10
        T10 --> T11[T-11: Documentation]
        T11 --> T12[T-12: Monitoring]
    \`\`\`
    
    ## Risk Assessment
    - High risk: <task> — mitigation
    - Critical path: T-1 → T-2 → T-4 → T-7 → T-8 → T-10 → T-11
    
    ## Execution Guidelines
    1) One task at a time; update status ⚪→🟡→✅
    2) Verify all EARS criteria before Done
    3) Tests pass; docs updated
    
    ## EXECUTION STEPS
    1) **USE Read tool** to check and read: ${project_path}/.spec/steering/product.md, tech.md, structure.md. If present, APPLY them to select correct file extensions, paths, commands, and terminology. If missing, continue and mark gaps as [NEEDED]. Also read relevant project files (package.json, README.md, etc.).
    2) Extract requirements and evidence from user request and codebase
    3) Map [EXISTS]/[EXAMPLE]/[NEEDED] to implementation vs. research tasks
    4) mkdir -p ${project_path}/.spec/specs/
    5) Write tasks to ${project_path}/.spec/specs/tasks.md
    6) Verify the file exists (Read tool)
    
    ## SUCCESS CRITERIA
    ✅ .spec/specs/tasks.md physically created and verified
    ✅ Requirements section with EARS acceptance criteria
    ✅ Clear traceability (T-X → R-Y)
    ✅ References to files/snippets when available (mark [NEEDED] when unavailable)
    ✅ Testing requirements and dependency graph included
    `;
    
        return {
          content: [{ type: "text", text: prompt }]
        };
      }
  • The input schema for the 'generate-tasks' tool, defining parameters: user_request (required string) and project_path (optional string, defaults to current dir). Uses Zod for validation.
    inputSchema: {
      user_request: z.string().describe("Feature request or requirement to plan for"),
      project_path: z.string().optional().describe("Path to the project directory (defaults to current directory)")
    }
  • src/server.ts:233-417 (registration)
    The registration of the 'generate-tasks' tool on the MCP server, including title, description, inputSchema, and handler function.
    server.registerTool(
      'generate-tasks',
      {
        title: 'Spec MCP: Generate tasks.md (Plan & Task Breakdown)',
        description: 'Generate tasks.md (implementation plan & task breakdown) from your request and codebase. Writes `.spec/specs/tasks.md` with overview, numbered requirements using EARS, implementable tasks (T-1…), dependencies/phases, and testing guidance. Use for “task breakdown”, “create tasks.md”, “implementation plan”, or “roadmap”.',
        inputSchema: {
          user_request: z.string().describe("Feature request or requirement to plan for"),
          project_path: z.string().optional().describe("Path to the project directory (defaults to current directory)")
        }
      },
  async ({ user_request, project_path = '.' }) => {
    // Handler body omitted here: it is identical to the 'generate-tasks'
    // handler shown in full above.
  }
    );
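The handler returns its prompt as MCP text content. As a small sketch of how a caller might pull the prompt text back out of that result, assuming only the shape the handler returns above (the `ToolResult` type here is a simplification, not the SDK's full result type):

```typescript
// Minimal result shape matching the handler's return value above.
type ToolResult = { content: { type: string; text: string }[] };

// Concatenate all text items from a tool result (hypothetical helper).
function promptText(result: ToolResult): string {
  return result.content
    .filter((item) => item.type === "text")
    .map((item) => item.text)
    .join("\n");
}
```

A client would then hand this prompt to the model, which is expected to carry out the file-creation steps the prompt describes.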
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses the key behaviors: it writes a file (.spec/specs/tasks.md), generates structured content (overview, requirements, tasks, dependencies, testing guidance), and analyzes the user request and the codebase. However, it does not mention potential side effects such as overwriting an existing tasks.md, nor error handling or performance considerations, which are notable gaps for a tool that performs file operations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the core purpose. Each sentence adds value: the first defines the action, the second details the output content, and the third provides usage examples. There's no redundant information, though it could be slightly more streamlined.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (file generation with structured planning), no annotations, and no output schema, the description is moderately complete. It covers the purpose, output format, and usage context, but lacks details on behavioral aspects like error handling, file overwriting risks, or output validation. For a tool that writes files, more transparency would be beneficial.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('project_path' and 'user_request') adequately. The description adds minimal value beyond the schema by implying how parameters are used ('from your request and codebase'), but doesn't provide additional syntax, format details, or constraints. A baseline score of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Generate tasks.md (implementation plan & task breakdown) from your request and codebase.' It specifies the verb ('Generate'), resource ('tasks.md'), and what it contains ('overview, numbered requirements using EARS, implementable tasks...'). However, it doesn't explicitly differentiate from sibling tools like 'task-checker' or 'task-orchestrator' beyond listing use-case synonyms.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear usage context with phrases like 'Use for “task breakdown”, “create tasks.md”, “implementation plan”, or “roadmap”' and specifies the input source ('from your request and codebase'). It doesn't explicitly state when NOT to use this tool or name alternatives among siblings, but the context is sufficiently clear for typical use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
