
brain_dump_tasks

Convert natural language task lists into organized todos by parsing priorities, timeframes, and categories for ADHD-friendly productivity management.

Instructions

ADHD-friendly brain dump: paste a list of tasks and get them into the system.

Workflow:

  1. Parse natural language task list

  2. Optionally create all todos in batch

  3. Return summary with suggestions

Supports multiple formats:

  • Bullet points: "- Task 1"

  • Numbered lists: "1. Task 2"

  • Checkboxes: "[ ] Task 3"

  • Section headers: "Sprint work:\n- Task 4"

Smart extraction:

  • Priority: "#high", "URGENT:", "!!"

  • Timeframe: "this week", "(this month)", "someday"

  • Time: "(30min)", "~2h"

  • Energy: "[high energy]", "[low effort]"

  • Theme: "#sprint", "@admin"
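The marker syntax above lends itself to regular-expression extraction. The following is a hypothetical sketch of how the priority and time markers could be recognized; the shipped `parse_natural_language_task_list` implementation may differ.

```python
import re

# Hypothetical sketch of the marker extraction described above; the real
# parse_natural_language_task_list implementation may differ.
PRIORITY_RE = re.compile(r"#(high|medium|low)\b|^(URGENT):|(!!)", re.I)
TIME_RE = re.compile(r"[(~]?\s*(\d+)\s*(min|m|h|hr)s?\b\)?", re.I)

def extract_markers(line: str) -> dict:
    """Pull priority and time-estimate markers out of one task line."""
    todo = {"title": line, "priority": None, "time_estimate": None}

    m = PRIORITY_RE.search(line)
    if m:
        # "#high" carries an explicit level; "URGENT:" and "!!" imply high.
        todo["priority"] = (m.group(1) or "high").lower()
        line = PRIORITY_RE.sub("", line, count=1)

    m = TIME_RE.search(line)
    if m:
        value, unit = int(m.group(1)), m.group(2).lower()
        # Normalize hours to minutes.
        todo["time_estimate"] = value * 60 if unit.startswith("h") else value
        line = TIME_RE.sub("", line, count=1)

    todo["title"] = line.strip(" -:").strip()
    return todo
```

For example, `extract_markers("URGENT: Smoke test GPU endpoints (2h)")` would yield a high-priority todo with a 120-minute estimate and a clean title.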

Example:

brain_dump_tasks('''
Sprint work (due 11/18):
- URGENT: Smoke test GPU endpoints (2h)
- Deploy to UTest environment

Strategic (this month):
- Draft platform pitch outline (30min) [low effort]
- Research Platform One examples

Quick wins:
- Check ADO board
- Reply to emails
''')

Args:

  • text: Natural language task list

  • auto_create: Create todos immediately (default: True)

  • default_priority: Default priority for unparsed tasks

  • default_timeframe: Default timeframe for unparsed tasks

Returns: Summary of created tasks with suggestions

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| text | Yes | Natural language task list | |
| auto_create | No | Create todos immediately | True |
| default_priority | No | Default priority for unparsed tasks | medium |
| default_timeframe | No | Default timeframe for unparsed tasks | |

Output Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| result | Yes | Summary of created tasks with suggestions | |

Implementation Reference

  • The 'brain_dump_tasks' tool parses natural language task lists and optionally adds the resulting todos to the system in a batch.
    async def brain_dump_tasks(
        text: str,
        auto_create: bool = True,
        default_priority: str = "medium",
        default_timeframe: Optional[str] = None,
    ) -> str:
        """ADHD-friendly brain dump: paste a list of tasks and get them into the system.
    
        Workflow:
        1. Parse natural language task list
        2. Optionally create all todos in batch
        3. Return summary with suggestions
    
        Supports multiple formats:
        - Bullet points: "- Task 1"
        - Numbered lists: "1. Task 2"
        - Checkboxes: "[ ] Task 3"
        - Section headers: "Sprint work:\\n- Task 4"
    
        Smart extraction:
        - Priority: "#high", "URGENT:", "!!"
        - Timeframe: "this week", "(this month)", "someday"
        - Time: "(30min)", "~2h"
        - Energy: "[high energy]", "[low effort]"
        - Theme: "#sprint", "@admin"
    
        Example:
        ```
        brain_dump_tasks('''
        Sprint work (due 11/18):
        - URGENT: Smoke test GPU endpoints (2h)
        - Deploy to UTest environment
    
        Strategic (this month):
        - Draft platform pitch outline (30min) [low effort]
        - Research Platform One examples
    
        Quick wins:
        - Check ADO board
        - Reply to emails
        ''')
        ```
    
        Args:
            text: Natural language task list
            auto_create: Create todos immediately (default: True)
            default_priority: Default priority for unparsed tasks
            default_timeframe: Default timeframe for unparsed tasks
    
        Returns:
            Summary of created tasks with suggestions
        """
        db = await storage.get_db()
    
        # Parse the text
        parse_result = parse_natural_language_task_list(
            text, default_priority, default_timeframe
        )
    
        parsed_todos = parse_result["parsed_todos"]
    
        if not auto_create:
            # Preview mode
            response = f"PREVIEW MODE - {parse_result['parse_summary']}\n\n"
            response += f"Would create {len(parsed_todos)} todos:\n"
            for todo in parsed_todos[:10]:  # Show first 10
                response += f"  • {todo['title']}"
                if todo.get('priority') != 'medium':
                    response += f" ({todo['priority']} priority)"
                if todo.get('timeframe'):
                    response += f" [{todo['timeframe']}]"
                response += "\n"
    
            if len(parsed_todos) > 10:
                response += f"  ... and {len(parsed_todos) - 10} more\n"
    
            response += "\nCall again with auto_create=True to create these todos."
            return response
    
        # Create todos in batch
        if parsed_todos:
            create_result = await add_todos_batch(parsed_todos, db, auto_categorize=True)
            created_todos = create_result["created_todos"]
            suggestions = create_result["suggestions"]
        else:
            return "No tasks parsed. Check formatting."
    
        # Build response
        response = f"✓ Created {len(created_todos)} tasks from brain dump\n\n"
    
        # Show created tasks by category
        by_priority = {"high": [], "medium": [], "low": []}
        quick_wins = []
    
        for todo in created_todos:
            if todo.get('quick') or (todo.get('time_estimate', 999) <= 30):
                quick_wins.append(todo)
            by_priority[todo.get('priority', 'medium')].append(todo)
    
        if by_priority['high']:
            response += f"High priority: {len(by_priority['high'])} tasks\n"
        if quick_wins:
            response += f"Quick wins: {len(quick_wins)} tasks\n"
    
        # Show suggestions
        if suggestions:
            response += "\nAuto-categorization:\n"
            for suggestion in suggestions[:5]:  # Limit to 5
                response += f"  • {suggestion}\n"
    
        # Identify tasks needing review
        needs_review = []
        for todo in created_todos:
            if not todo.get('timeframe'):
                needs_review.append(f"#{todo['id']}: {todo['title']} - no timeframe assigned")
            elif todo.get('time_estimate', 0) > 120:
                needs_review.append(
                    f"#{todo['id']}: {todo['title']} - large task ({todo['time_estimate']}min), consider breaking down"
                )
    
        if needs_review:
            response += f"\n⚠️ {len(needs_review)} tasks need review:\n"
            for review in needs_review[:3]:  # Show first 3
                response += f"  • {review}\n"
    
    response += "\nRun 'start_my_day' to see which tasks are selected for today."
    
        return response
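The helper `parse_natural_language_task_list` is referenced above but not shown. As a purely illustrative sketch, section headers such as "Sprint work:" could be grouped with the bullet, numbered, or checkbox lines beneath them like this (the real parser also extracts priority, timeframe, and other markers from each line):

```python
import re

# Bullet, numbered-list, and checkbox prefixes from the supported formats.
PREFIX_RE = re.compile(r"^(?:[-*]\s*|\d+[.)]\s*|\[[ xX]\]\s*)")

def split_into_sections(text: str) -> list:
    """Group task lines under the section header that precedes them.

    Illustrative sketch only; not the shipped implementation.
    """
    sections = []
    current = {"header": None, "tasks": []}
    for raw in text.splitlines():
        line = raw.strip()
        if not line:
            continue
        if line.endswith(":") and not line.startswith(("-", "*", "[")):
            # "Sprint work:" style header starts a new group.
            if current["header"] is not None or current["tasks"]:
                sections.append(current)
            current = {"header": line.rstrip(":").strip(), "tasks": []}
        else:
            current["tasks"].append(PREFIX_RE.sub("", line))
    if current["header"] is not None or current["tasks"]:
        sections.append(current)
    return sections
```

Run against the brain-dump example above, this would produce one group per section header, each with its cleaned task titles.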
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations provided, so description carries full burden. Effectively discloses parsing behavior (multiple formats supported), extraction logic (smart detection of #high, [30min], etc.), and batch mutation behavior ('Create todos immediately'). Lacks error handling details or rate limits, but strong on functional behavior.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Lengthy but well-structured with clear headers (Workflow, Supports multiple formats, Smart extraction, Example). Information-dense given complexity; the example code block efficiently demonstrates syntax. Minor verbosity in format examples is justified by 0% schema coverage.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Comprehensive for a complex NLP tool: covers input formats, extraction patterns, parameters, and return value (summary with suggestions). Output schema exists per context signals, so brief return description is appropriate. Could strengthen by mentioning relationship to delete_todo for undoing batch creations.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema has 0% description coverage (titles only). Description fully compensates via 'Args' section documenting all 4 parameters: text (natural language task list), auto_create (immediate creation toggle), and defaults for priority/timeframe. Critical for agent to understand 'auto_create' controls the destructive/creation behavior.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Opens with specific value proposition ('ADHD-friendly brain dump') and core mechanism ('paste a list of tasks and get them into the system'). Distinguishes from siblings like add_todo (single task) and parse_task_list via emphasis on bulk creation, natural language parsing, and smart metadata extraction (priority, timeframe, energy).

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Implies usage context through 'ADHD-friendly' and workflow description (bulk unstructured input vs. careful single entry), but lacks explicit when-to-use guidance distinguishing it from parse_task_list or add_todo. No mention of prerequisites or when auto_create should be false.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
