break_down_task

Decomposes large tasks into manageable subtasks using Claude AI, then automatically stores them in the database for organized execution.

Instructions

Use LLM to decompose large task into smaller subtasks.

This returns a prompt for Claude to generate subtasks. After Claude responds, call this tool again with the response to create the subtasks in the database.

Workflow:

  1. Call break_down_task(todo_id=X) -> Returns prompt

  2. Send prompt to Claude

  3. Claude returns JSON with subtasks

  4. Subtasks are automatically created in database

Args:

  • todo_id: ID of the task to break down
  • subtask_count: Target number of subtasks (default: 5)

Returns: Prompt for Claude to generate subtasks, or confirmation if creating them
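To make the two-call workflow concrete, here is a hedged sketch of how an agent might handle step 3, parsing the JSON Claude returns into subtask dicts. The payload shape and field names (`subtasks`, `title`) are illustrative assumptions, not taken from the source:

```python
import json

# Hypothetical example of the JSON Claude might return in step 3.
# The "subtasks"/"title" field names are assumptions for illustration.
claude_response = """
{
  "subtasks": [
    {"title": "Draft outline"},
    {"title": "Write first section"},
    {"title": "Review and edit"}
  ]
}
"""

def parse_subtasks(raw: str) -> list[dict]:
    """Parse the JSON payload into a list of subtask dicts."""
    data = json.loads(raw)
    return data.get("subtasks", [])

print([s["title"] for s in parse_subtasks(claude_response)])
```

The server would then insert each parsed subtask into the `todos` table; the exact insertion logic is not shown in the excerpts below.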

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| todo_id | Yes | | |
| subtask_count | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The `break_down_task` function, decorated with `@mcp.tool()`, implements the tool handler. It fetches the task from the database and uses `break_down_task_with_claude` to generate a prompt for task decomposition.
    @mcp.tool()
    async def break_down_task(todo_id: int, subtask_count: int = 5) -> str:
        """Use LLM to decompose large task into smaller subtasks.
    
        This returns a prompt for Claude to generate subtasks. After Claude responds,
        call this tool again with the response to create the subtasks in the database.
    
        Workflow:
        1. Call break_down_task(todo_id=X) -> Returns prompt
        2. Send prompt to Claude
        3. Claude returns JSON with subtasks
        4. Subtasks are automatically created in database
    
        Args:
            todo_id: ID of the task to break down
            subtask_count: Target number of subtasks (default: 5)
    
        Returns:
            Prompt for Claude to generate subtasks, or confirmation if creating them
        """
        db = await storage.get_db()
    
        # Get todo
        cursor = await db.execute(
            """
            SELECT id, title, priority, notes, timeframe, theme_tag, time_estimate
            FROM todos WHERE id = ?
            """,
            (todo_id,),
        )
        row = await cursor.fetchone()
        if not row:
            return f"Error: Todo #{todo_id} not found"
    
        todo = dict(row)
    
        # Generate prompt for Claude
        breakdown = await break_down_task_with_claude(todo, subtask_count)
    
        response = f"**Breaking down task #{todo_id}: {todo['title']}**\n\n"
        response += "I'll help break this down into smaller steps. Here's what I recommend:\n\n"
        response += breakdown['prompt']
        response += "\n\n*Note: This is a prompt for planning. The subtasks will be created automatically based on the breakdown.*"
    
        return response
  • The `break_down_task_with_claude` helper function generates the prompt for an LLM to decompose a task into subtasks.
    async def break_down_task_with_claude(
        todo: dict[str, Any],
        subtask_count: int = 5,
    ) -> dict[str, Any]:
        """
        Use Claude (via prompt) to break down a large task into smaller subtasks.

        This function returns a prompt that should be sent to Claude.
        The caller (MCP tool) will handle the actual LLM call.

        Args:
            todo: The todo dict to break down
            subtask_count: Target number of subtasks

        Returns:
            Dict with prompt for LLM
        """
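The helper's body is truncated in the excerpt above. A minimal sketch of what it might do, assuming it simply interpolates the todo's fields into a prompt string (the prompt wording and everything except the `prompt` key are guesses), is:

```python
import asyncio
from typing import Any

async def break_down_task_with_claude(
    todo: dict[str, Any],
    subtask_count: int = 5,
) -> dict[str, Any]:
    # Hypothetical reconstruction: build a decomposition prompt from todo fields.
    prompt = (
        f"Break the task '{todo['title']}' into about {subtask_count} "
        "smaller subtasks.\n"
        f"Priority: {todo.get('priority', 'none')}\n"
        f"Notes: {todo.get('notes') or 'none'}\n"
        'Respond with JSON of the form {"subtasks": [{"title": "..."}]}.'
    )
    return {"prompt": prompt}

# Example call with a minimal todo dict.
result = asyncio.run(
    break_down_task_with_claude({"title": "Write report", "priority": "high"})
)
print("prompt" in result)  # True
```

Note the function is declared `async` even though this sketch does no I/O; the real implementation may await database or API calls.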
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses the database mutation ('create the subtasks in the database') and LLM dependency ('Use LLM'), plus the unusual two-call behavioral pattern. However, it lacks details on error handling, idempotency concerns, or what happens if the todo_id doesn't exist.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description uses clear structural sections (Workflow, Args, Returns) that front-load the most critical information. The 4-step workflow is necessary given the tool's complexity. Only minor redundancy exists in describing the return value when an output schema is present, but this is justified given the 0% schema coverage elsewhere.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (two-phase LLM interaction with database mutations), the description adequately covers the invocation pattern, parameters, and return behavior. While it mentions the output schema exists, it appropriately documents the dual return type (prompt vs confirmation) given the poor schema coverage, providing sufficient context for correct usage.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema has 0% description coverage (no descriptions on properties), but the description compensates with an 'Args:' section that documents both todo_id ('ID of the task to break down') and subtask_count ('Target number of subtasks (default: 5)'). It adds the semantic meaning missing from the schema structure.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool uses an LLM to 'decompose large task into smaller subtasks,' specifying the verb, resource, and method. It distinguishes itself from sibling tools like add_todo by emphasizing the AI-driven decomposition aspect, though it could clarify what constitutes a 'large' task warranting this treatment.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit workflow guidance with a numbered 4-step process: call tool to get prompt, send to Claude, receive JSON, and automatic database creation. It clearly explains the two-phase invocation pattern (first call returns prompt, second call creates subtasks), which is critical for correct sequencing.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
