
workflowy_move_node

Move a WorkFlowy node to a different parent location to reorganize outlines and task hierarchies. Specify node ID, new parent ID, and position for structured workflow management.

Instructions

Move a WorkFlowy node to a new parent

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| node_id | Yes | The ID of the node to move | |
| parent_id | No | The new parent node ID (a UUID, a target key like 'inbox', or None for root) | |
| position | No | Where to place the node ('top' or 'bottom') | top |
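As a quick illustration of the schema above, a hypothetical helper that applies the same defaults and checks before a call (the function name and error messages here are illustrative, not part of the server):

```python
def validate_move_args(args: dict) -> dict:
    """Apply the schema's defaults and basic checks (illustrative sketch)."""
    if "node_id" not in args:
        raise ValueError("node_id is required")
    out = {
        "node_id": args["node_id"],
        # Optional: UUID, a target key like 'inbox', or None for root.
        "parent_id": args.get("parent_id"),
        # Optional: defaults to 'top' per the schema.
        "position": args.get("position", "top"),
    }
    if out["position"] not in ("top", "bottom"):
        raise ValueError("position must be 'top' or 'bottom'")
    return out
```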

Implementation Reference

  • Registration and handler function for the MCP tool 'workflowy_move_node'. Calls the client's move_node with rate limiting.
    @mcp.tool(name="workflowy_move_node", description="Move a WorkFlowy node to a new parent")
    async def move_node(
        node_id: str,
        parent_id: str | None = None,
        position: str = "top",
    ) -> bool:
        """Move a node to a new parent.

        Args:
            node_id: The ID of the node to move
            parent_id: The new parent node ID (UUID, target key like 'inbox', or None for root)
            position: Where to place the node ('top' or 'bottom', default 'top')

        Returns:
            True if move was successful
        """
        client = get_client()
        if _rate_limiter:
            await _rate_limiter.acquire()
        try:
            success = await client.move_node(node_id, parent_id, position)
            if _rate_limiter:
                _rate_limiter.on_success()
            return success
        except Exception as e:
            if _rate_limiter and e.__class__.__name__ == "RateLimitError":
                _rate_limiter.on_rate_limit(getattr(e, "retry_after", None))
            raise
  • Core implementation of move_node in WorkFlowyClientCore with full retry logic, rate limiting delays, error handling, and cache dirty marking.
    async def move_node(
        self,
        node_id: str,
        parent_id: str | None = None,
        position: str = "top",
        max_retries: int = 10,
    ) -> bool:
        """Move a node to a new parent with exponential backoff retry.

        Args:
            node_id: The ID of the node to move
            parent_id: The new parent node ID (UUID, target key like 'inbox', or None for root)
            position: Where to place the node ('top' or 'bottom', default 'top')
            max_retries: Maximum retry attempts (default 10)

        Returns:
            True if move was successful
        """
        import asyncio

        logger = _ClientLogger()
        retry_count = 0
        base_delay = 1.0

        while retry_count < max_retries:
            # Force delay at START of each iteration (rate limit protection)
            await asyncio.sleep(API_RATE_LIMIT_DELAY)
            try:
                payload = {"position": position}
                if parent_id is not None:
                    payload["parent_id"] = parent_id
                response = await self.client.post(f"/nodes/{node_id}/move", json=payload)
                data = await self._handle_response(response)
                # API returns {"status": "ok"}
                success = data.get("status") == "ok"
                if success:
                    # Best-effort: mark this node (and its new parent, if any)
                    # as dirty so path-based exports will refresh as needed.
                    try:
                        ids: list[str] = [node_id]
                        if parent_id is not None:
                            ids.append(parent_id)
                        self._mark_nodes_export_dirty(ids)
                    except Exception:
                        # Cache dirty marking must never affect API behavior
                        pass
                return success
            except RateLimitError as e:
                retry_count += 1
                retry_after = getattr(e, "retry_after", None) or (base_delay * (2 ** retry_count))
                logger.warning(
                    f"Rate limited on move_node. Retry after {retry_after}s. "
                    f"Attempt {retry_count}/{max_retries}"
                )
                if retry_count < max_retries:
                    await asyncio.sleep(retry_after)
                else:
                    raise
            except NetworkError as e:
                retry_count += 1
                logger.warning(f"Network error on move_node: {e}. Retry {retry_count}/{max_retries}")
                if retry_count < max_retries:
                    await asyncio.sleep(base_delay * (2 ** retry_count))
                else:
                    raise
            except httpx.TimeoutException as err:
                retry_count += 1
                logger.warning(f"Timeout error: {err}. Retry {retry_count}/{max_retries}")
                if retry_count < max_retries:
                    await asyncio.sleep(base_delay * (2 ** retry_count))
                else:
                    raise TimeoutError("move_node") from err

        raise NetworkError("move_node failed after maximum retries")
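Note that the retry loop increments `retry_count` before computing its backoff, so the sleeps between attempts follow base_delay * 2**n for n = 1 through max_retries - 1. A small helper reproducing that schedule for inspection (illustrative only, not part of the client):

```python
def backoff_delays(base_delay: float = 1.0, max_retries: int = 10) -> list[float]:
    """Delays the retry loop above would sleep between attempts.

    retry_count is incremented before the delay is computed, and no sleep
    happens after the final attempt, so n runs from 1 to max_retries - 1.
    """
    return [base_delay * (2 ** n) for n in range(1, max_retries)]
```

With the defaults this yields 2, 4, 8, ... up to 512 seconds, after which the loop raises.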

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/daniel347x/workflowy-mcp-fixed'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.