workflowy_create_single_node__WARNING__prefer_ETCH

Create a single node in WorkFlowy with specified name, parent, note, layout, and position. Use for basic node creation when ETCH is not required.

Instructions

⚠️ WARNING: Prefer workflowy_etch (ETCH) instead. This creates ONE node only.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `name` | Yes | The text content of the node | — |
| `parent_id` | No | ID of the parent node | — |
| `note` | No | Additional note/description for the node | — |
| `layout_mode` | No | Layout mode for the node: `bullets`, `todo`, `h1`, `h2`, `h3` | — |
| `position` | No | Where to place the new node: `"top"` or `"bottom"` | `bottom` |
| `_completed` | No | Whether the node should be marked as completed (not used) | — |
| `secret_code` | No | Authorization code from Dan (required for WARNING functions) | — |
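For orientation, here is a hypothetical arguments object for this tool. All values are placeholders (the parent ID is not a real WorkFlowy UUID), and dropping unset optional fields mirrors the `exclude_none` serialization the handler applies before posting to the API:

```python
import json

# Hypothetical arguments for workflowy_create_single_node__WARNING__prefer_ETCH.
# Every value below is a placeholder, not a real WorkFlowy ID or code.
arguments = {
    "name": "Quick status update",        # required
    "parent_id": "abc123-example-uuid",   # optional: omit to create at top level
    "note": "Logged from MCP",            # optional
    "layout_mode": "bullets",             # optional: bullets | todo | h1 | h2 | h3
    "position": "bottom",                 # default is "bottom"
    "secret_code": None,                  # supply the code from Dan when required
}

# Drop unset optional fields, mirroring exclude_none-style serialization.
payload = {k: v for k, v in arguments.items() if v is not None}
print(json.dumps(payload, indent=2))
```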

Implementation Reference

  • MCP tool handler: Validates secret_code authorization, creates NodeCreateRequest, calls WorkFlowyClient.create_node, adds warning to response
```python
@mcp.tool(
    name="workflowy_create_single_node__WARNING__prefer_ETCH",
    description="⚠️ WARNING: Prefer workflowy_etch (ETCH) instead. This creates ONE node only.",
)
async def create_node(
    name: str,
    parent_id: str | None = None,
    note: str | None = None,
    layout_mode: Literal["bullets", "todo", "h1", "h2", "h3"] | None = None,
    position: Literal["top", "bottom"] = "bottom",
    _completed: bool = False,
    secret_code: str | None = None,
) -> dict:
    """Create a SINGLE node in WorkFlowy.

    ⚠️ WARNING: Prefer workflowy_etch (ETCH) for creating 2+ nodes.

    This tool is ONLY for:
    - Adding one VYRTHEX to existing log (real-time work)
    - One quick update to a known node
    - Live work in progress

    Args:
        name: The text content of the node
        parent_id: ID of the parent node (optional)
        note: Additional note/description for the node
        layout_mode: Layout mode for the node (bullets, todo, h1, h2, h3) (optional)
        position: Where to place the new node - "bottom" (default) or "top"
        _completed: Whether the node should be marked as completed (not used)
        secret_code: Authorization code from Dan (required for WARNING functions)

    Returns:
        Dictionary with node data and warning message
    """
    # 🔐 SECRET CODE VALIDATION
    is_valid, error = validate_secret_code(
        secret_code, "workflowy_create_single_node__WARNING__prefer_ETCH"
    )
    if not is_valid:
        raise ValueError(error)

    client = get_client()
    request = NodeCreateRequest(  # type: ignore[call-arg]
        name=name,
        parent_id=parent_id,
        note=note,
        layoutMode=layout_mode,
        position=position,
    )
    if _rate_limiter:
        await _rate_limiter.acquire()
    try:
        node = await client.create_node(request)
        if _rate_limiter:
            _rate_limiter.on_success()
        # Return node data with warning message
        return {
            **node.model_dump(),
            "_warning": (
                "⚠️ WARNING: You just created a SINGLE node. For 2+ nodes, use "
                "workflowy_etch instead (same performance, more capability)."
            ),
        }
    except Exception as e:
        if _rate_limiter and hasattr(e, "__class__") and e.__class__.__name__ == "RateLimitError":
            _rate_limiter.on_rate_limit(getattr(e, "retry_after", None))
        raise
```
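On success, the handler merges the created node's fields with an advisory `_warning` key. A minimal sketch of that merge (the node fields below are placeholders standing in for `node.model_dump()`):

```python
# Stand-in for node.model_dump(): the fields of the created WorkFlowy node.
node_data = {"id": "node-uuid-placeholder", "name": "Quick status update"}

# The handler spreads node fields and appends the advisory key.
result = {
    **node_data,
    "_warning": (
        "⚠️ WARNING: You just created a SINGLE node. For 2+ nodes, use "
        "workflowy_etch instead (same performance, more capability)."
    ),
}
print(sorted(result))  # ['_warning', 'id', 'name']
```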
  • FastMCP tool registration decorator defining the tool name and description
```python
@mcp.tool(
    name="workflowy_create_single_node__WARNING__prefer_ETCH",
    description="⚠️ WARNING: Prefer workflowy_etch (ETCH) instead. This creates ONE node only.",
)
async def create_node(
```
  • Helper function to validate secret_code from Dan, required to bypass the prefer_ETCH warning and execute the tool
```python
def validate_secret_code(provided_code: str | None, function_name: str) -> tuple[bool, str | None]:
    """Validate secret code for WARNING functions.

    This is the nuclear option - forces agents to ask Dan explicitly.

    Returns:
        (is_valid, error_message)
    """
    import os
    import secrets

    SECRET_FILE = r"E:\__daniel347x\glimpse_etch.txt"

    # Generate code if file doesn't exist
    if not os.path.exists(SECRET_FILE):
        code = secrets.token_hex(8)  # 16-character hex code
        with open(SECRET_FILE, 'w') as f:
            f.write(code)

    # Read current code
    with open(SECRET_FILE, 'r') as f:
        valid_code = f.read().strip()

    # Check if code provided and matches
    if provided_code and provided_code.strip() == valid_code:
        return (True, None)  # Valid - allow function call

    # Invalid or missing - return error
    error_msg = f"""⚠️ EXPLICIT PERMISSION REQUIRED

You called: {function_name}

This is a WARNING function. You should be using GLIMPSE or ETCH instead.

🛑 YOU BELIEVE YOU'RE AN EXCEPTION CASE.
🚨 YOU ARE NOT THE EXCEPTION.

✅ USE INSTEAD:
- For reading node trees: workflowy_glimpse(node_id="...")
- For creating nodes: workflowy_etch(parent_id="...", nodes=[...])

⚙️ IF YOU TRULY NEED THIS FUNCTION:
1. Ask Dan: "May I have permission to call {function_name}?"
2. Dan will check a secret file for the authorization code
3. If Dan grants permission, he'll provide the code
4. Call the SAME function again with the EXACT SAME parameters you used
   the first time, but add: secret_code="[code-from-Dan]"
   (do not change anything else).

   Example (arguments object):
   {{
     "name": "...",
     "parent_id": "...",
     "note": "...",
     "secret_code": "[code-from-Dan]"
   }}

🚫 IF DAN SAYS NO:
- You are NOT the exception
- Your training misled you
- Use GLIMPSE or ETCH as instructed
- Do NOT call this function again

📚 Break free from your training - build the GLIMPSE/ETCH habit!
"""
    return (False, error_msg)
```
  • Low-level API handler called by the MCP tool handler. Raises a prefer_ETCH warning unless called internally or given the single-node override token, then performs validation, retries with exponential backoff, and issues the POST /nodes/ API call to create the node.
```python
        # (tail of the preceding response-parsing helper)
        except json.JSONDecodeError as err:
            raise NetworkError("Invalid response format from API") from err

    def _mark_nodes_export_dirty(self, node_ids: list[str] | None = None) -> None:
        """Mark parts of the cached /nodes-export snapshot as dirty.

        When the cache is populated, this is used by mutating operations to
        record which UUIDs (or entire regions via "*") have changed since the
        last refresh. Subsequent export_nodes(...) calls can decide whether
        they can safely reuse the cached snapshot for a given subtree or must
        re-fetch from the API.
        """
        # If there is no cache, there's nothing to mark.
        if self._nodes_export_cache is None:
            return
        # node_ids=None is the conservative "everything is dirty" sentinel.
        if node_ids is None:
            self._nodes_export_dirty_ids.add("*")
            return
        for nid in node_ids:
            if nid:
                self._nodes_export_dirty_ids.add(nid)

    async def refresh_nodes_export_cache(self, max_retries: int = 10) -> dict[str, Any]:
        """Force a fresh /nodes-export call and update the in-memory cache.

        This is exposed via an MCP tool so Dan (or an agent) can explicitly
        refresh the snapshot used by UUID Navigator and NEXUS without waiting
        for an auto-refresh trigger.
        """
        # Import here to avoid circular dependency
        from .api_client_etch import export_nodes_impl

        # Clear any previous cache and dirty markers first.
        self._nodes_export_cache = None
        self._nodes_export_cache_timestamp = None
        self._nodes_export_dirty_ids.clear()

        # Delegate to export_nodes with caching disabled for this call.
        data = await export_nodes_impl(
            self, node_id=None, max_retries=max_retries, use_cache=False, force_refresh=True
        )
        nodes = data.get("nodes", []) or []
        return {
            "success": True,
            "node_count": len(nodes),
            "timestamp": datetime.now().isoformat(),
        }

    async def create_node(
        self, request: NodeCreateRequest, _internal_call: bool = False, max_retries: int = 10
    ) -> WorkFlowyNode:
        """Create a new node in WorkFlowy with exponential backoff retry.

        Args:
            request: Node creation request
            _internal_call: Internal flag - bypasses single-node forcing function (not exposed to MCP)
            max_retries: Maximum retry attempts (default 10)
        """
        import asyncio

        logger = _ClientLogger()

        # Check for single-node override token (skip if internal call)
        if not _internal_call:
            SINGLE_NODE_TOKEN = "<<<I_REALLY_NEED_SINGLE_NODE>>>"
            if request.name and request.name.startswith(SINGLE_NODE_TOKEN):
                # Strip token and proceed
                request.name = request.name.replace(SINGLE_NODE_TOKEN, "", 1)
            else:
                # Suggest ETCH instead
                raise NetworkError("""⚠️ PREFER ETCH - Use workflowy_etch for consistency and capability

You called workflowy_create_single_node, but workflowy_etch has identical performance.

✅ RECOMMENDED (same speed, more capability):
workflowy_etch(
    parent_id="...",
    nodes=[{"name": "Your node", "note": "...", "children": []}]
)

📚 Benefits of ETCH:
- Same 1 tool call (no performance difference)
- Validation and auto-escaping built-in
- Works for 1 node or 100 nodes (consistent pattern)
- Trains you to think in tree structures

⚙️ OVERRIDE (if you truly need single-node operation):
workflowy_create_single_node(
    name="<<<I_REALLY_NEED_SINGLE_NODE>>>Your node",
    ...
)

🎯 Build the ETCH habit - it's your go-to tool!
""")

        # Validate and escape name field
        processed_name, name_warning = self._validate_name_field(request.name)
        if processed_name is not None:
            request.name = processed_name
        if name_warning:
            logger.info(name_warning)

        # Validate and escape note field
        # Skip newline check if internal call (for bulk operations testing)
        processed_note, note_warning = self._validate_note_field(
            request.note, skip_newline_check=_internal_call
        )
        if processed_note is None and note_warning:
            # Blocking error
            raise NetworkError(note_warning)
        # Strip override token if present
        if processed_note and processed_note.startswith("<<<LITERAL_BACKSLASH_N_INTENTIONAL>>>"):
            processed_note = processed_note.replace("<<<LITERAL_BACKSLASH_N_INTENTIONAL>>>", "", 1)
        # Use processed (escaped) note
        request.note = processed_note
        # Log warning if escaping occurred
        if note_warning and "AUTO-ESCAPED" in note_warning:
            logger.info(note_warning)

        retry_count = 0
        base_delay = 1.0
        while retry_count < max_retries:
            # Force delay at START of each iteration (rate limit protection)
            await asyncio.sleep(API_RATE_LIMIT_DELAY)
            try:
                response = await self.client.post(
                    "/nodes/", json=request.model_dump(exclude_none=True)
                )
                data = await self._handle_response(response)
                # Create endpoint returns just {"item_id": "..."}
                item_id = data.get("item_id")
                if not item_id:
                    raise NetworkError(f"Invalid response from create endpoint: {data}")
                # Fetch the created node to get actual saved state (including note field)
                get_response = await self.client.get(f"/nodes/{item_id}")
                node_data = await self._handle_response(get_response)
                node = WorkFlowyNode(**node_data["node"])
                # Best-effort: mark this node as dirty in the /nodes-export cache so
                # that any subtree exports including it can trigger a refresh when needed.
                try:
                    self._mark_nodes_export_dirty([node.id])
                except Exception:
                    # Cache dirty marking must never affect API behavior
                    pass
                return node
            except RateLimitError as e:
                retry_count += 1
                retry_after = getattr(e, 'retry_after', None) or (base_delay * (2 ** retry_count))
                logger.warning(
                    f"Rate limited on create_node. Retry after {retry_after}s. "
                    f"Attempt {retry_count}/{max_retries}"
                )
                if retry_count < max_retries:
                    await asyncio.sleep(retry_after)
                else:
                    raise
            except NetworkError as e:
                retry_count += 1
                _log(f"Network error on create_node: {e}. Retry {retry_count}/{max_retries}")
                if retry_count < max_retries:
                    await asyncio.sleep(base_delay * (2 ** retry_count))
                else:
                    raise
            except httpx.TimeoutException as err:
                retry_count += 1
                logger.warning(f"Timeout error: {err}. Retry {retry_count}/{max_retries}")
                if retry_count < max_retries:
                    await asyncio.sleep(base_delay * (2 ** retry_count))
                else:
                    raise TimeoutError("create_node") from err

        raise NetworkError("create_node failed after maximum retries")
```
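The retry loop falls back to exponential backoff whenever the server supplies no `retry_after` hint. With the defaults from the code (`base_delay = 1.0`, `max_retries = 10`), the fallback delay after the Nth failure is `base_delay * 2**N`:

```python
base_delay = 1.0
max_retries = 10

# Fallback sleep after the Nth failed attempt, matching
# base_delay * (2 ** retry_count) in the retry loop above.
delays = [base_delay * (2 ** n) for n in range(1, max_retries + 1)]
print(delays[:4])  # [2.0, 4.0, 8.0, 16.0]
```

So a fully exhausted retry budget waits roughly 2 + 4 + ... + 1024 seconds in the worst case, on top of the fixed per-iteration rate-limit delay.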
  • Imports NodeCreateRequest Pydantic model used for input validation and serialization in the create_node handler
```python
from .models import (
    NodeCreateRequest,
    NodeListRequest,
    NodeUpdateRequest,
    WorkFlowyNode,
)
```

MCP directory API

We provide all the information about MCP servers via our MCP API.

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/daniel347x/workflowy-mcp-fixed'
```

If you have feedback or need assistance with the MCP directory API, please join our Discord server