
workflowy_uncomplete_node

Mark a WorkFlowy node as not completed, returning a finished task or item in your outline to its active state.

Instructions

Mark a WorkFlowy node as not completed

Input Schema

Name     Required  Description                       Default
node_id  Yes       The ID of the node to uncomplete  —
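
For orientation, here is a sketch of invoking this tool from a client. It assumes the official MCP Python SDK; the workflowy-mcp launch command and the node ID are placeholders, not values from this repository.

    import asyncio

    from mcp import ClientSession, StdioServerParameters
    from mcp.client.stdio import stdio_client

    async def main() -> None:
        # Launch command is an assumption; substitute your server's entry point.
        params = StdioServerParameters(command="workflowy-mcp")
        async with stdio_client(params) as (read, write):
            async with ClientSession(read, write) as session:
                await session.initialize()
                # The node_id argument below is a placeholder.
                result = await session.call_tool(
                    "workflowy_uncomplete_node",
                    {"node_id": "00000000-0000-0000-0000-000000000000"},
                )
                print(result)

    asyncio.run(main())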

Implementation Reference

  • MCP tool handler function for workflowy_uncomplete_node. Registers the tool and implements the logic by calling the WorkFlowyClient.uncomplete_node method, guarded by a rate limiter (sketched after these references).
    @mcp.tool(name="workflowy_uncomplete_node", description="Mark a WorkFlowy node as not completed")
    async def uncomplete_node(node_id: str) -> WorkFlowyNode:
        """Mark a WorkFlowy node as not completed.
    
        Args:
            node_id: The ID of the node to uncomplete
    
        Returns:
            The updated WorkFlowy node
        """
        client = get_client()
    
        if _rate_limiter:
            await _rate_limiter.acquire()
    
        try:
            node = await client.uncomplete_node(node_id)
            if _rate_limiter:
                _rate_limiter.on_success()
            return node
        except Exception as e:
            # Detect the client's RateLimitError by class name rather than isinstance
            if _rate_limiter and type(e).__name__ == "RateLimitError":
                _rate_limiter.on_rate_limit(getattr(e, "retry_after", None))
            raise
  • Core API client implementation of uncomplete_node. Performs a POST to /nodes/{node_id}/uncomplete with exponential-backoff retries on rate-limit, network, and timeout errors, then fetches and returns the updated node (a sketch of the assumed _handle_response helper follows these references).
    async def uncomplete_node(self, node_id: str, max_retries: int = 10) -> WorkFlowyNode:
        """Mark a node as not completed with exponential backoff retry."""
        import asyncio
    
        logger = _ClientLogger()
        retry_count = 0
        base_delay = 1.0
    
        while retry_count < max_retries:
            # Force delay at START of each iteration (rate limit protection)
            await asyncio.sleep(API_RATE_LIMIT_DELAY)
    
            try:
                response = await self.client.post(f"/nodes/{node_id}/uncomplete")
                data = await self._handle_response(response)
                # API returns {"status": "ok"} - fetch updated node
                if isinstance(data, dict) and data.get('status') == 'ok':
                    get_response = await self.client.get(f"/nodes/{node_id}")
                    node_data = await self._handle_response(get_response)
                    return WorkFlowyNode(**node_data["node"])
                else:
                    # Fallback for unexpected format
                    return WorkFlowyNode(**data)
    
            except RateLimitError as e:
                retry_count += 1
                retry_after = getattr(e, 'retry_after', None) or (base_delay * (2 ** retry_count))
                logger.warning(
                    f"Rate limited on uncomplete_node. Retry after {retry_after}s. "
                    f"Attempt {retry_count}/{max_retries}"
                )
                if retry_count < max_retries:
                    await asyncio.sleep(retry_after)
                else:
                    raise
    
            except NetworkError as e:
                retry_count += 1
                logger.warning(
                    f"Network error on uncomplete_node: {e}. Retry {retry_count}/{max_retries}"
                )
                if retry_count < max_retries:
                    await asyncio.sleep(base_delay * (2 ** retry_count))
                else:
                    raise
    
            except httpx.TimeoutException as err:
                retry_count += 1
                logger.warning(
                    f"Timeout error: {err}. Retry {retry_count}/{max_retries}"
                )
                if retry_count < max_retries:
                    await asyncio.sleep(base_delay * (2 ** retry_count))
                else:
                    raise TimeoutError("uncomplete_node") from err
    
        raise NetworkError("uncomplete_node failed after maximum retries")
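
The handler above assumes a module-level _rate_limiter exposing acquire(), on_success(), and on_rate_limit(retry_after). A minimal sketch of an object satisfying that interface (hypothetical, not the repository's actual class):

    import asyncio
    import time

    class AdaptiveRateLimiter:
        """Hypothetical limiter matching the interface the handler assumes."""

        def __init__(self, min_interval: float = 1.0):
            self._interval = min_interval
            self._next_allowed = 0.0
            self._lock = asyncio.Lock()

        async def acquire(self) -> None:
            # Block until the next request slot opens.
            async with self._lock:
                now = time.monotonic()
                if now < self._next_allowed:
                    await asyncio.sleep(self._next_allowed - now)
                self._next_allowed = time.monotonic() + self._interval

        def on_success(self) -> None:
            # Gently relax the interval after a successful call.
            self._interval = max(0.1, self._interval * 0.9)

        def on_rate_limit(self, retry_after: float | None) -> None:
            # Back off, honoring the server's Retry-After hint when present.
            self._interval = retry_after if retry_after else self._interval * 2
            self._next_allowed = time.monotonic() + self._interval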
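
The client method likewise assumes a _handle_response helper plus RateLimitError and NetworkError types. A sketch of that contract, under the assumption that the API signals rate limits with HTTP 429 and an optional Retry-After header:

    import httpx

    class RateLimitError(Exception):
        """Assumed: raised on HTTP 429, carrying the Retry-After hint."""

        def __init__(self, retry_after: float | None = None):
            super().__init__("rate limited")
            self.retry_after = retry_after

    class NetworkError(Exception):
        """Assumed: raised for non-429 HTTP and transport failures."""

    async def _handle_response(response: httpx.Response) -> dict:
        # Surface 429s as RateLimitError so the retry loop can back off.
        if response.status_code == 429:
            header = response.headers.get("Retry-After")
            raise RateLimitError(float(header) if header else None)
        try:
            response.raise_for_status()
        except httpx.HTTPStatusError as err:
            raise NetworkError(str(err)) from err
        return response.json()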
