
workflowy_delete_node

Remove a node and its sub-items from WorkFlowy outlines to manage hierarchical task lists and maintain organized workflows.

Instructions

Delete a WorkFlowy node and all its children

Input Schema

| Name    | Required | Description | Default |
|---------|----------|-------------|---------|
| node_id | Yes      |             |         |

Output Schema


No arguments

Implementation Reference

  • MCP tool handler that registers the 'workflowy_delete_node' tool via the @mcp.tool decorator. It acquires the rate limiter, calls WorkFlowyClient.delete_node(node_id), handles exceptions (including rate limits), and returns a success status with the deleted ID.
    @mcp.tool(name="workflowy_delete_node", description="Delete a WorkFlowy node and all its children")
    async def delete_node(node_id: str) -> dict:
        """Delete a WorkFlowy node and all its children.
    
        Args:
            node_id: The ID of the node to delete
    
        Returns:
            Dictionary with success status
        """
        client = get_client()
    
        if _rate_limiter:
            await _rate_limiter.acquire()
    
        try:
            success = await client.delete_node(node_id)
            if _rate_limiter:
                _rate_limiter.on_success()
            return {"success": success, "deleted_id": node_id}
        except Exception as e:
            # Compare by class name to avoid a hard import dependency on the error type
            if _rate_limiter and type(e).__name__ == "RateLimitError":
                _rate_limiter.on_rate_limit(getattr(e, "retry_after", None))
            raise
  • Core HTTP API implementation in WorkFlowyClientCore.delete_node: it sends DELETE /nodes/{node_id}, retries rate limits, timeouts, and network errors with exponential backoff, marks the nodes-export cache dirty, and logs retries to the reconcile file.
    async def delete_node(self, node_id: str, max_retries: int = 10) -> bool:
        """Delete a node and all its children with exponential backoff retry.
        
        Args:
            node_id: The ID of the node to delete
            max_retries: Maximum retry attempts (default 10)
        """
        import asyncio
        from .api_client_etch import _log_to_file_helper
    
        logger = _ClientLogger()
        retry_count = 0
        base_delay = 1.0
        
        while retry_count < max_retries:
            # Force delay at START of each iteration (rate limit protection)
            await asyncio.sleep(API_RATE_LIMIT_DELAY)
            
            try:
                response = await self.client.delete(f"/nodes/{node_id}")
                # Delete endpoint returns just a message, not nested data
                await self._handle_response(response)
                # If we reached here after one or more retries, log success to reconcile log
                if retry_count > 0:
                    success_msg = (
                        f"delete_node {node_id} succeeded after {retry_count + 1}/{max_retries} attempts "
                        f"following rate limiting or transient errors."
                    )
                    logger.info(success_msg)
                    _log_to_file_helper(success_msg, "reconcile")
    
                # Best-effort: mark this node as dirty so any subsequent
                # /nodes-export-based operations that rely on it will trigger
                # a refresh when needed.
                try:
                    self._mark_nodes_export_dirty([node_id])
                except Exception:
                    # Cache dirty marking must never affect API behavior
                    pass
    
                return True
                
            except RateLimitError as e:
                retry_count += 1
                retry_after = getattr(e, 'retry_after', None) or (base_delay * (2 ** retry_count))
                retry_msg = (
                    f"Rate limited on delete_node {node_id}. Retry after {retry_after}s. "
                    f"Attempt {retry_count}/{max_retries}"
                )
                logger.warning(retry_msg)
                _log_to_file_helper(retry_msg, "reconcile")
                
                if retry_count < max_retries:
                    await asyncio.sleep(retry_after)
                else:
                    final_msg = (
                        f"delete_node {node_id} exhausted retries ({retry_count}/{max_retries}) "
                        f"due to rate limiting – aborting."
                    )
                    logger.error(final_msg)
                    _log_to_file_helper(final_msg, "reconcile")
                    raise
                    
            except NetworkError as e:
                retry_count += 1
                logger.warning(
                    f"Network error on delete_node: {e}. Retry {retry_count}/{max_retries}"
                )
                
                if retry_count < max_retries:
                    await asyncio.sleep(base_delay * (2 ** retry_count))
                else:
                    raise
                    
            except httpx.TimeoutException as err:
                retry_count += 1
                
                logger.warning(
                    f"Timeout error: {err}. Retry {retry_count}/{max_retries}"
                )
                
                if retry_count < max_retries:
                    await asyncio.sleep(base_delay * (2 ** retry_count))
                else:
                    raise TimeoutError("delete_node") from err
        
        raise NetworkError("delete_node failed after maximum retries")
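
The retry schedule used in the except branches above can be sketched in isolation. This is a hypothetical standalone helper (backoff_delay is not part of the client) mirroring the base_delay * (2 ** retry_count) formula:

```python
def backoff_delay(retry_count: int, base_delay: float = 1.0) -> float:
    """Exponential backoff delay for a given (1-based) retry attempt.

    Matches the delay computed in the RateLimitError / NetworkError
    handlers: base_delay * 2 ** retry_count.
    """
    return base_delay * (2 ** retry_count)

# First few delays after failed attempts 1, 2, 3: 2.0s, 4.0s, 8.0s
print([backoff_delay(n) for n in range(1, 4)])
```

Note that when the server supplies a retry_after hint on a RateLimitError, the code above prefers it over this computed delay.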
Behavior 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. While it states the tool deletes a node and its children (implying a destructive, irreversible action), it fails to mention critical details like required permissions, error handling (e.g., if the node_id is invalid), or confirmation prompts. For a destructive tool with zero annotation coverage, this is a significant gap.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
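
One way to close this gap is MCP tool annotations, which let a server declare destructive behavior explicitly. The payload below is a hypothetical sketch (these hint values are assumptions about this tool, not taken from its source) shaped like the annotation fields in the MCP specification:

```python
# Hypothetical annotations for workflowy_delete_node, following the
# hint fields defined by the MCP specification:
annotations = {
    "title": "Delete WorkFlowy node",
    "readOnlyHint": False,     # the tool mutates the outline
    "destructiveHint": True,   # deletion of the node and children is irreversible
    "openWorldHint": True,     # the tool talks to the external WorkFlowy API
}
print(annotations["destructiveHint"])
```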

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core action ('Delete') and resource. There is no wasted verbiage, making it highly concise and well-structured for quick comprehension.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's destructive nature and lack of annotations, the description is minimally adequate but incomplete: it states what the tool does but omits usage context, parameter semantics, and behavioral risks. The presence of an output schema, which may cover return values, prevents a lower score, but more detail is needed for safe operation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 1 parameter with 0% description coverage, and the description does not explain what 'node_id' represents (e.g., a unique identifier from WorkFlowy, how to obtain it, or format constraints). Since schema coverage is low (<50%), the description should compensate but adds no parameter details, resulting in a baseline score of 3 due to the single parameter's simplicity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
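
The missing parameter coverage could be addressed in the schema itself. A hypothetical input schema for this tool (the description text below is illustrative, not from the server's source) might look like:

```python
# Hypothetical JSON Schema for workflowy_delete_node with a
# description filled in for node_id:
input_schema = {
    "type": "object",
    "properties": {
        "node_id": {
            "type": "string",
            "description": (
                "Unique identifier of the WorkFlowy node to delete; "
                "obtain it from a prior list or search operation."
            ),
        }
    },
    "required": ["node_id"],
}
```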

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Delete') and resource ('a WorkFlowy node and all its children'), distinguishing it from sibling tools like workflowy_update_node or workflowy_move_node. It precisely defines the scope of deletion (node plus children), making the purpose unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives (e.g., workflowy_uncomplete_node for marking incomplete, workflowy_move_node for relocation, or workflowy_etch for batch operations). It lacks context about prerequisites, such as needing the node_id from a prior operation, or warnings about irreversible deletion.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/daniel347x/workflowy-mcp-fixed'
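
The same request can be built from Python with the standard library. This sketch only constructs the request object; actually sending it requires network access:

```python
import urllib.request

# Python equivalent of the curl call above (request is built, not sent):
req = urllib.request.Request(
    "https://glama.ai/api/mcp/v1/servers/daniel347x/workflowy-mcp-fixed",
    method="GET",
)
print(req.get_method(), req.full_url)
```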

If you have feedback or need assistance with the MCP directory API, please join our Discord server.