workflowy_complete_node
Mark a WorkFlowy node as completed to track task progress and maintain organized outlines through the WorkFlowy MCP Server.
Instructions
Mark a WorkFlowy node as completed
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| node_id | Yes | The ID of the node to complete | |
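Only node_id is required; the tool takes no optional parameters. For illustration, here is a hedged sketch of how a client could invoke this tool using the official MCP Python SDK; the launch command and node ID are placeholders, not values taken from this server's documentation.

```python
# Hedged sketch: assumes the `mcp` Python SDK and a locally launchable
# WorkFlowy MCP server; the command and node ID below are placeholders.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    server = StdioServerParameters(command="workflowy-mcp", args=[])  # placeholder command
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "workflowy_complete_node",
                {"node_id": "<your-node-id>"},  # placeholder node ID
            )
            print(result)


asyncio.run(main())
```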
Implementation Reference
- src/workflowy_mcp/server.py:1221-1245 (handler): MCP tool registration and handler for workflowy_complete_node. Thin wrapper around WorkFlowyClient.complete_node with global rate limiting.

```python
@mcp.tool(name="workflowy_complete_node", description="Mark a WorkFlowy node as completed")
async def complete_node(node_id: str) -> WorkFlowyNode:
    """Mark a WorkFlowy node as completed.

    Args:
        node_id: The ID of the node to complete

    Returns:
        The updated WorkFlowy node
    """
    client = get_client()
    if _rate_limiter:
        await _rate_limiter.acquire()
    try:
        node = await client.complete_node(node_id)
        if _rate_limiter:
            _rate_limiter.on_success()
        return node
    except Exception as e:
        if _rate_limiter and hasattr(e, "__class__") and e.__class__.__name__ == "RateLimitError":
            _rate_limiter.on_rate_limit(getattr(e, "retry_after", None))
        raise
```
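The handler assumes a module-level _rate_limiter object exposing acquire(), on_success(), and on_rate_limit(retry_after); its actual implementation is not shown in this reference. A minimal sketch of an adaptive limiter with that interface, for illustration only, could look like:

```python
# Illustration only: the server's real rate limiter is not shown in this
# reference; this sketch merely matches the acquire/on_success/on_rate_limit
# interface the handler above relies on.
import asyncio
import time


class AdaptiveRateLimiter:
    """Spaces out requests and backs off when the API reports rate limits."""

    def __init__(self, min_interval: float = 1.0) -> None:
        self.min_interval = min_interval  # seconds between requests
        self._next_allowed = 0.0          # monotonic timestamp of next allowed call
        self._lock = asyncio.Lock()

    async def acquire(self) -> None:
        # Wait until the next request slot is available.
        async with self._lock:
            now = time.monotonic()
            if now < self._next_allowed:
                await asyncio.sleep(self._next_allowed - now)
            self._next_allowed = time.monotonic() + self.min_interval

    def on_success(self) -> None:
        # Gradually relax the interval after successful calls.
        self.min_interval = max(0.25, self.min_interval * 0.9)

    def on_rate_limit(self, retry_after: float | None) -> None:
        # Honor the server's Retry-After hint, or double the current interval.
        delay = retry_after if retry_after else self.min_interval * 2
        self.min_interval = min(60.0, max(self.min_interval, delay))
        self._next_allowed = time.monotonic() + delay
```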
- Core implementation of complete_node in the API client. Performs POST to /nodes/{node_id}/complete, fetches the updated node via GET, and handles retries for rate limits, network errors, and timeouts.

```python
async def complete_node(self, node_id: str, max_retries: int = 10) -> WorkFlowyNode:
    """Mark a node as completed with exponential backoff retry."""
    import asyncio

    logger = _ClientLogger()
    retry_count = 0
    base_delay = 1.0

    while retry_count < max_retries:
        # Force delay at START of each iteration (rate limit protection)
        await asyncio.sleep(API_RATE_LIMIT_DELAY)
        try:
            response = await self.client.post(f"/nodes/{node_id}/complete")
            data = await self._handle_response(response)

            # API returns {"status": "ok"} - fetch updated node
            if isinstance(data, dict) and data.get('status') == 'ok':
                get_response = await self.client.get(f"/nodes/{node_id}")
                node_data = await self._handle_response(get_response)
                return WorkFlowyNode(**node_data["node"])
            else:
                # Fallback for unexpected format
                return WorkFlowyNode(**data)
        except RateLimitError as e:
            retry_count += 1
            retry_after = getattr(e, 'retry_after', None) or (base_delay * (2 ** retry_count))
            logger.warning(
                f"Rate limited on complete_node. Retry after {retry_after}s. "
                f"Attempt {retry_count}/{max_retries}"
            )
            if retry_count < max_retries:
                await asyncio.sleep(retry_after)
            else:
                raise
        except NetworkError as e:
            retry_count += 1
            logger.warning(
                f"Network error on complete_node: {e}. Retry {retry_count}/{max_retries}"
            )
            if retry_count < max_retries:
                await asyncio.sleep(base_delay * (2 ** retry_count))
            else:
                raise
        except httpx.TimeoutException as err:
            retry_count += 1
            logger.warning(
                f"Timeout error: {err}. Retry {retry_count}/{max_retries}"
            )
            if retry_count < max_retries:
                await asyncio.sleep(base_delay * (2 ** retry_count))
            else:
                raise TimeoutError("complete_node") from err

    raise NetworkError("complete_node failed after maximum retries")
```
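For reference, the network and timeout retries above sleep for base_delay * 2 ** retry_count seconds after the counter has been incremented, so the waits grow from 2 s through 512 s before the tenth and final attempt; rate-limit retries prefer the server-supplied retry_after when it is present. A standalone illustration of that schedule (not part of the server code):

```python
# Illustration only: reproduces the exponential backoff schedule complete_node
# uses for network/timeout retries (not part of the server code).
base_delay = 1.0
max_retries = 10

for retry_count in range(1, max_retries):
    delay = base_delay * (2 ** retry_count)
    print(f"retry {retry_count}/{max_retries}: wait {delay:.0f}s")
# retry 1: 2s, retry 2: 4s, ..., retry 9: 512s
```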