# n8n_update_partial_workflow
Apply incremental changes to n8n workflows with targeted diff operations (adding nodes, updating connections, modifying settings) instead of replacing the entire workflow.
## Instructions
Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, moveNode, enable/disableNode, addConnection, removeConnection, updateSettings, updateName, add/removeTag. See tools_documentation("n8n_update_partial_workflow", "full") for details.
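For orientation, here is a minimal sketch of an incremental update; the workflow ID, node names, and parameter values are illustrative placeholders, not taken from a real workflow.

```typescript
// Add a Set node, wire it in after an existing Webhook node, and rename the
// workflow, all in a single incremental call (placeholders throughout).
n8n_update_partial_workflow({
  id: "wf_123",
  intent: "Insert a data-shaping step after the webhook",
  operations: [
    {
      type: "addNode",
      node: { name: "Shape Data", type: "n8n-nodes-base.set", position: [460, 300], parameters: {} }
    },
    { type: "addConnection", source: "Webhook", target: "Shape Data" },
    { type: "updateName", name: "Order Intake v2" }
  ]
});
```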
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| id | Yes | Workflow ID to update | |
| operations | Yes | Array of diff operations to apply. Each operation must have a "type" field and relevant properties for that operation type. | |
| validateOnly | No | If true, only validate operations without applying them | |
| continueOnError | No | If true, apply valid operations even if some fail (best-effort mode). Returns applied and failed operation indices. | false (atomic) |
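The two optional flags change how the operations are applied. A hedged sketch of both modes, with placeholder IDs and node names:

```typescript
// Dry run: validate the operations without applying them.
n8n_update_partial_workflow({
  id: "wf_123",
  operations: [{ type: "removeNode", nodeName: "Legacy Formatter" }],
  validateOnly: true
});

// Best-effort mode: apply whatever succeeds and report failed operation indices.
// Without continueOnError the call is atomic: one failing operation aborts all of them.
n8n_update_partial_workflow({
  id: "wf_123",
  operations: [
    { type: "removeConnection", source: "Old Node", target: "Target", ignoreErrors: true },
    { type: "cleanStaleConnections" }
  ],
  continueOnError: true
});
```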
## Implementation Reference
- src/mcp/handlers-workflow-diff.ts:73-426 (handler)Main execution handler for the n8n_update_partial_workflow tool. Processes input arguments, applies incremental diff operations to workflow (addNode, removeNode, updateNode, connections, etc.), handles validation, backups, n8n API updates, and telemetry tracking.export async function handleUpdatePartialWorkflow( args: unknown, repository: NodeRepository, context?: InstanceContext ): Promise<McpToolResponse> { const startTime = Date.now(); const sessionId = `mutation_${Date.now()}_${Math.random().toString(36).slice(2, 11)}`; let workflowBefore: any = null; let validationBefore: any = null; let validationAfter: any = null; try { // Debug logging (only in debug mode) if (process.env.DEBUG_MCP === 'true') { logger.debug('Workflow diff request received', { argsType: typeof args, hasWorkflowId: args && typeof args === 'object' && 'workflowId' in args, operationCount: args && typeof args === 'object' && 'operations' in args ? (args as any).operations?.length : 0 }); } // Validate input const input = workflowDiffSchema.parse(args); // Get API client const client = getN8nApiClient(context); if (!client) { return { success: false, error: 'n8n API not configured. Please set N8N_API_URL and N8N_API_KEY environment variables.' }; } // Fetch current workflow let workflow; try { workflow = await client.getWorkflow(input.id); // Store original workflow for telemetry workflowBefore = JSON.parse(JSON.stringify(workflow)); // Validate workflow BEFORE mutation (for telemetry) try { const validator = getValidator(repository); validationBefore = await validator.validateWorkflow(workflowBefore, { validateNodes: true, validateConnections: true, validateExpressions: true, profile: 'runtime' }); } catch (validationError) { logger.debug('Pre-mutation validation failed (non-blocking):', validationError); // Don't block mutation on validation errors validationBefore = { valid: false, errors: [{ type: 'validation_error', message: 'Validation failed' }] }; } } catch (error) { if (error instanceof N8nApiError) { return { success: false, error: getUserFriendlyErrorMessage(error), code: error.code }; } throw error; } // Create backup before modifying workflow (default: true) if (input.createBackup !== false && !input.validateOnly) { try { const versioningService = new WorkflowVersioningService(repository, client); const backupResult = await versioningService.createBackup(input.id, workflow, { trigger: 'partial_update', operations: input.operations }); logger.info('Workflow backup created', { workflowId: input.id, versionId: backupResult.versionId, versionNumber: backupResult.versionNumber, pruned: backupResult.pruned }); } catch (error: any) { logger.warn('Failed to create workflow backup', { workflowId: input.id, error: error.message }); // Continue with update even if backup fails (non-blocking) } } // Apply diff operations const diffEngine = new WorkflowDiffEngine(); const diffRequest = input as WorkflowDiffRequest; const diffResult = await diffEngine.applyDiff(workflow, diffRequest); // Check if this is a complete failure or partial success in continueOnError mode if (!diffResult.success) { // In continueOnError mode, partial success is still valuable if (diffRequest.continueOnError && diffResult.workflow && diffResult.operationsApplied && diffResult.operationsApplied > 0) { logger.info(`continueOnError mode: Applying ${diffResult.operationsApplied} successful operations despite ${diffResult.failed?.length || 0} failures`); // Continue to update workflow with partial 
changes } else { // Complete failure - return error return { success: false, error: 'Failed to apply diff operations', details: { errors: diffResult.errors, warnings: diffResult.warnings, operationsApplied: diffResult.operationsApplied, applied: diffResult.applied, failed: diffResult.failed } }; } } // If validateOnly, return validation result if (input.validateOnly) { return { success: true, message: diffResult.message, data: { valid: true, operationsToApply: input.operations.length }, details: { warnings: diffResult.warnings } }; } // Validate final workflow structure after applying all operations // This prevents creating workflows that pass operation-level validation // but fail workflow-level validation (e.g., UI can't render them) // // Validation can be skipped for specific integration tests that need to test // n8n API behavior with edge case workflows by setting SKIP_WORKFLOW_VALIDATION=true if (diffResult.workflow) { const structureErrors = validateWorkflowStructure(diffResult.workflow); if (structureErrors.length > 0) { const skipValidation = process.env.SKIP_WORKFLOW_VALIDATION === 'true'; logger.warn('Workflow structure validation failed after applying diff operations', { workflowId: input.id, errors: structureErrors, blocking: !skipValidation }); // Analyze error types to provide targeted recovery guidance const errorTypes = new Set<string>(); structureErrors.forEach(err => { if (err.includes('operator') || err.includes('singleValue')) errorTypes.add('operator_issues'); if (err.includes('connection') || err.includes('referenced')) errorTypes.add('connection_issues'); if (err.includes('Missing') || err.includes('missing')) errorTypes.add('missing_metadata'); if (err.includes('branch') || err.includes('output')) errorTypes.add('branch_mismatch'); }); // Build recovery guidance based on error types const recoverySteps = []; if (errorTypes.has('operator_issues')) { recoverySteps.push('Operator structure issue detected. Use validate_node_operation to check specific nodes.'); recoverySteps.push('Binary operators (equals, contains, greaterThan, etc.) must NOT have singleValue:true'); recoverySteps.push('Unary operators (isEmpty, isNotEmpty, true, false) REQUIRE singleValue:true'); } if (errorTypes.has('connection_issues')) { recoverySteps.push('Connection validation failed. Check all node connections reference existing nodes.'); recoverySteps.push('Use cleanStaleConnections operation to remove connections to non-existent nodes.'); } if (errorTypes.has('missing_metadata')) { recoverySteps.push('Missing metadata detected. Ensure filter-based nodes (IF v2.2+, Switch v3.2+) have complete conditions.options.'); recoverySteps.push('Required options: {version: 2, leftValue: "", caseSensitive: true, typeValidation: "strict"}'); } if (errorTypes.has('branch_mismatch')) { recoverySteps.push('Branch count mismatch. Ensure Switch nodes have outputs for all rules (e.g., 3 rules = 3 output branches).'); } // Add generic recovery steps if no specific guidance if (recoverySteps.length === 0) { recoverySteps.push('Review the validation errors listed above'); recoverySteps.push('Fix issues using updateNode or cleanStaleConnections operations'); recoverySteps.push('Run validate_workflow again to verify fixes'); } const errorMessage = structureErrors.length === 1 ? 
`Workflow validation failed: ${structureErrors[0]}` : `Workflow validation failed with ${structureErrors.length} structural issues`; // If validation is not skipped, return error and block the save if (!skipValidation) { return { success: false, error: errorMessage, details: { errors: structureErrors, errorCount: structureErrors.length, operationsApplied: diffResult.operationsApplied, applied: diffResult.applied, recoveryGuidance: recoverySteps, note: 'Operations were applied but created an invalid workflow structure. The workflow was NOT saved to n8n to prevent UI rendering errors.', autoSanitizationNote: 'Auto-sanitization runs on all nodes during updates to fix operator structures and add missing metadata. However, it cannot fix all issues (e.g., broken connections, branch mismatches). Use the recovery guidance above to resolve remaining issues.' } }; } // Validation skipped: log warning but continue (for specific integration tests) logger.info('Workflow validation skipped (SKIP_WORKFLOW_VALIDATION=true): Allowing workflow with validation warnings to proceed', { workflowId: input.id, warningCount: structureErrors.length }); } } // Update workflow via API try { const updatedWorkflow = await client.updateWorkflow(input.id, diffResult.workflow!); // Handle activation/deactivation if requested let finalWorkflow = updatedWorkflow; let activationMessage = ''; // Validate workflow AFTER mutation (for telemetry) try { const validator = getValidator(repository); validationAfter = await validator.validateWorkflow(finalWorkflow, { validateNodes: true, validateConnections: true, validateExpressions: true, profile: 'runtime' }); } catch (validationError) { logger.debug('Post-mutation validation failed (non-blocking):', validationError); // Don't block on validation errors validationAfter = { valid: false, errors: [{ type: 'validation_error', message: 'Validation failed' }] }; } if (diffResult.shouldActivate) { try { finalWorkflow = await client.activateWorkflow(input.id); activationMessage = ' Workflow activated.'; } catch (activationError) { logger.error('Failed to activate workflow after update', activationError); return { success: false, error: 'Workflow updated successfully but activation failed', details: { workflowUpdated: true, activationError: activationError instanceof Error ? activationError.message : 'Unknown error' } }; } } else if (diffResult.shouldDeactivate) { try { finalWorkflow = await client.deactivateWorkflow(input.id); activationMessage = ' Workflow deactivated.'; } catch (deactivationError) { logger.error('Failed to deactivate workflow after update', deactivationError); return { success: false, error: 'Workflow updated successfully but deactivation failed', details: { workflowUpdated: true, deactivationError: deactivationError instanceof Error ? 
deactivationError.message : 'Unknown error' } }; } } // Track successful mutation if (workflowBefore && !input.validateOnly) { trackWorkflowMutation({ sessionId, toolName: 'n8n_update_partial_workflow', userIntent: input.intent || 'Partial workflow update', operations: input.operations, workflowBefore, workflowAfter: finalWorkflow, validationBefore, validationAfter, mutationSuccess: true, durationMs: Date.now() - startTime, }).catch(err => { logger.debug('Failed to track mutation telemetry:', err); }); } return { success: true, data: { id: finalWorkflow.id, name: finalWorkflow.name, active: finalWorkflow.active, nodeCount: finalWorkflow.nodes?.length || 0, operationsApplied: diffResult.operationsApplied }, message: `Workflow "${finalWorkflow.name}" updated successfully. Applied ${diffResult.operationsApplied} operations.${activationMessage} Use n8n_get_workflow with mode 'structure' to verify current state.`, details: { applied: diffResult.applied, failed: diffResult.failed, errors: diffResult.errors, warnings: diffResult.warnings } }; } catch (error) { // Track failed mutation if (workflowBefore && !input.validateOnly) { trackWorkflowMutation({ sessionId, toolName: 'n8n_update_partial_workflow', userIntent: input.intent || 'Partial workflow update', operations: input.operations, workflowBefore, workflowAfter: workflowBefore, // No change since it failed validationBefore, validationAfter: validationBefore, // Same as before since mutation failed mutationSuccess: false, mutationError: error instanceof Error ? error.message : 'Unknown error', durationMs: Date.now() - startTime, }).catch(err => { logger.warn('Failed to track mutation telemetry for failed operation:', err); }); } if (error instanceof N8nApiError) { return { success: false, error: getUserFriendlyErrorMessage(error), code: error.code, details: error.details as Record<string, unknown> | undefined }; } throw error; } } catch (error) { if (error instanceof z.ZodError) { return { success: false, error: 'Invalid input', details: { errors: error.errors } }; } logger.error('Failed to update partial workflow', error); return { success: false, error: error instanceof Error ? error.message : 'Unknown error occurred' }; } }
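To make the control flow above easier to follow, here is a simplified paraphrase (not a verbatim excerpt; the result type is an illustrative stand-in) of how the handler decides whether to proceed after `applyDiff`:

```typescript
// Illustrative stand-in for the diff engine's result shape.
interface DiffOutcome {
  success: boolean;
  workflow?: unknown;
  operationsApplied?: number;
  failed?: Array<{ index: number; error: string }>;
}

// Paraphrase of the partial-success check in handleUpdatePartialWorkflow:
// atomic mode aborts on any failure; continueOnError proceeds only if at
// least one operation applied and the engine still produced a workflow.
function shouldSavePartialResult(result: DiffOutcome, continueOnError?: boolean): boolean {
  if (result.success) return true;
  return continueOnError === true
    && result.workflow !== undefined
    && (result.operationsApplied ?? 0) > 0;
}
```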
- `src/mcp/tools-n8n-manager.ts:125-155` (schema): Tool schema definition including name, description, and inputSchema for validating parameters (id, operations array, validateOnly, continueOnError). Part of the `n8nManagementTools` export.

```typescript
{
  name: 'n8n_update_partial_workflow',
  description: `Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, moveNode, enable/disableNode, addConnection, removeConnection, updateSettings, updateName, add/removeTag. See tools_documentation("n8n_update_partial_workflow", "full") for details.`,
  inputSchema: {
    type: 'object',
    additionalProperties: true, // Allow any extra properties Claude Desktop might add
    properties: {
      id: {
        type: 'string',
        description: 'Workflow ID to update'
      },
      operations: {
        type: 'array',
        description: 'Array of diff operations to apply. Each operation must have a "type" field and relevant properties for that operation type.',
        items: {
          type: 'object',
          additionalProperties: true
        }
      },
      validateOnly: {
        type: 'boolean',
        description: 'If true, only validate operations without applying them'
      },
      continueOnError: {
        type: 'boolean',
        description: 'If true, apply valid operations even if some fail (best-effort mode). Returns applied and failed operation indices. Default: false (atomic)'
      }
    },
    required: ['id', 'operations']
  }
},
```
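As a sketch of what the schema accepts: because `additionalProperties` is `true`, extra fields such as `intent` pass schema validation even though they are not declared; the values below are placeholders.

```typescript
// Arguments that satisfy the inputSchema above (placeholder values).
const args = {
  id: "wf_123",
  intent: "Point the HTTP Request node at the new API host", // extra field, allowed
  operations: [
    {
      type: "updateNode",
      nodeName: "HTTP Request",
      updates: { "parameters.url": "https://api.example.com" }
    }
  ],
  validateOnly: false
};
```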
- `src/mcp/tools-n8n-manager.ts:9` (registration): The tool is defined as an entry in the `n8nManagementTools` array (`export const n8nManagementTools: ToolDefinition[] = [`), which the MCP server imports to register the tool with its schema and handler mapping.
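A hedged sketch of how a definition from this array might be looked up; the lookup helper below is hypothetical and only illustrates that each entry carries the name, description, and inputSchema used at registration time.

```typescript
import { n8nManagementTools } from './tools-n8n-manager';

// Hypothetical helper: the real server wiring lives elsewhere in the repository.
function findToolDefinition(name: string) {
  return n8nManagementTools.find((tool) => tool.name === name);
}

const updatePartialTool = findToolDefinition('n8n_update_partial_workflow');
// updatePartialTool?.inputSchema describes the arguments clients may send.
```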
- Supporting helpers: Zod input validation (`workflowDiffSchema`), a cached workflow validator, intent inference from operations (`inferIntentFromOperations`), and telemetry tracking (`trackWorkflowMutation`). The handler body itself is listed above; the schema and helper functions are:

```typescript
const workflowDiffSchema = z.object({
  id: z.string(),
  operations: z.array(z.object({
    type: z.string(),
    description: z.string().optional(),
    // Node operations
    node: z.any().optional(),
    nodeId: z.string().optional(),
    nodeName: z.string().optional(),
    updates: z.any().optional(),
    position: z.tuple([z.number(), z.number()]).optional(),
    // Connection operations
    source: z.string().optional(),
    target: z.string().optional(),
    from: z.string().optional(), // For rewireConnection
    to: z.string().optional(),   // For rewireConnection
    sourceOutput: z.string().optional(),
    targetInput: z.string().optional(),
    sourceIndex: z.number().optional(),
    targetIndex: z.number().optional(),
    // Smart parameters (Phase 1 UX improvement)
    branch: z.enum(['true', 'false']).optional(),
    case: z.number().optional(),
    ignoreErrors: z.boolean().optional(),
    // Connection cleanup operations
    dryRun: z.boolean().optional(),
    connections: z.any().optional(),
    // Metadata operations
    settings: z.any().optional(),
    name: z.string().optional(),
    tag: z.string().optional(),
  })),
  validateOnly: z.boolean().optional(),
  continueOnError: z.boolean().optional(),
  createBackup: z.boolean().optional(),
  intent: z.string().optional(),
});

/**
 * Infer intent from operations when not explicitly provided
 */
function inferIntentFromOperations(operations: any[]): string {
  if (!operations || operations.length === 0) {
    return 'Partial workflow update';
  }

  const opTypes = operations.map((op) => op.type);
  const opCount = operations.length;

  // Single operation - be specific
  if (opCount === 1) {
    const op = operations[0];
    switch (op.type) {
      case 'addNode': return `Add ${op.node?.type || 'node'}`;
      case 'removeNode': return `Remove node ${op.nodeName || op.nodeId || ''}`.trim();
      case 'updateNode': return `Update node ${op.nodeName || op.nodeId || ''}`.trim();
      case 'addConnection': return `Connect ${op.source || 'node'} to ${op.target || 'node'}`;
      case 'removeConnection': return `Disconnect ${op.source || 'node'} from ${op.target || 'node'}`;
      case 'rewireConnection': return `Rewire ${op.source || 'node'} from ${op.from || ''} to ${op.to || ''}`.trim();
      case 'updateName': return `Rename workflow to "${op.name || ''}"`;
      case 'activateWorkflow': return 'Activate workflow';
      case 'deactivateWorkflow': return 'Deactivate workflow';
      default: return `Workflow ${op.type}`;
    }
  }

  // Multiple operations - summarize pattern
  const typeSet = new Set(opTypes);
  const summary: string[] = [];

  if (typeSet.has('addNode')) {
    const count = opTypes.filter((t) => t === 'addNode').length;
    summary.push(`add ${count} node${count > 1 ? 's' : ''}`);
  }
  if (typeSet.has('removeNode')) {
    const count = opTypes.filter((t) => t === 'removeNode').length;
    summary.push(`remove ${count} node${count > 1 ? 's' : ''}`);
  }
  if (typeSet.has('updateNode')) {
    const count = opTypes.filter((t) => t === 'updateNode').length;
    summary.push(`update ${count} node${count > 1 ? 's' : ''}`);
  }
  if (typeSet.has('addConnection') || typeSet.has('rewireConnection')) {
    summary.push('modify connections');
  }
  if (typeSet.has('updateName') || typeSet.has('updateSettings')) {
    summary.push('update metadata');
  }

  return summary.length > 0
    ? `Workflow update: ${summary.join(', ')}`
    : `Workflow update: ${opCount} operations`;
}

/**
 * Track workflow mutation for telemetry
 */
async function trackWorkflowMutation(data: any): Promise<void> {
  try {
    // Enhance intent if it's missing or generic
    if (
      !data.userIntent ||
      data.userIntent === 'Partial workflow update' ||
      data.userIntent.length < 10
    ) {
      data.userIntent = inferIntentFromOperations(data.operations);
    }

    const { telemetry } = await import('../telemetry/telemetry-manager.js');
    await telemetry.trackWorkflowMutation(data);
  } catch (error) {
    logger.debug('Telemetry tracking failed:', error);
  }
}
```
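A quick illustration of the intent inference above; the expected strings are read off the function body rather than captured from a live run.

```typescript
// Single operation: the inferred intent names the specific change.
inferIntentFromOperations([
  { type: "addConnection", source: "Webhook", target: "Slack" }
]);
// -> "Connect Webhook to Slack"

// Multiple operations: the inferred intent summarizes the pattern.
inferIntentFromOperations([
  { type: "addNode", node: { type: "n8n-nodes-base.set" } },
  { type: "addNode", node: { type: "n8n-nodes-base.if" } },
  { type: "addConnection", source: "Set", target: "IF" }
]);
// -> "Workflow update: add 2 nodes, modify connections"
```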
- Extended tool documentation including full parameter descriptions, examples for operations, AI connections, best practices, pitfalls, and use cases.import { ToolDocumentation } from '../types'; export const n8nUpdatePartialWorkflowDoc: ToolDocumentation = { name: 'n8n_update_partial_workflow', category: 'workflow_management', essentials: { description: 'Update workflow incrementally with diff operations. Types: addNode, removeNode, updateNode, moveNode, enable/disableNode, addConnection, removeConnection, rewireConnection, cleanStaleConnections, replaceConnections, updateSettings, updateName, add/removeTag, activateWorkflow, deactivateWorkflow. Supports smart parameters (branch, case) for multi-output nodes. Full support for AI connections (ai_languageModel, ai_tool, ai_memory, ai_embedding, ai_vectorStore, ai_document, ai_textSplitter, ai_outputParser).', keyParameters: ['id', 'operations', 'continueOnError'], example: 'n8n_update_partial_workflow({id: "wf_123", operations: [{type: "rewireConnection", source: "IF", from: "Old", to: "New", branch: "true"}]})', performance: 'Fast (50-200ms)', tips: [ 'ALWAYS provide intent parameter describing what you\'re doing (e.g., "Add error handling", "Fix webhook URL", "Connect Slack to error output")', 'DON\'T use generic intent like "update workflow" or "partial update" - be specific about your goal', 'Use rewireConnection to change connection targets', 'Use branch="true"/"false" for IF nodes', 'Use case=N for Switch nodes', 'Use cleanStaleConnections to auto-remove broken connections', 'Set ignoreErrors:true on removeConnection for cleanup', 'Use continueOnError mode for best-effort bulk operations', 'Validate with validateOnly first', 'For AI connections, specify sourceOutput type (ai_languageModel, ai_tool, etc.)', 'Batch AI component connections for atomic updates', 'Auto-sanitization: ALL nodes auto-fixed during updates (operator structures, missing metadata)', 'Node renames automatically update all connection references - no manual connection operations needed', 'Activate/deactivate workflows: Use activateWorkflow/deactivateWorkflow operations (requires activatable triggers like webhook/schedule)' ] }, full: { description: `Updates workflows using surgical diff operations instead of full replacement. Supports 17 operation types for precise modifications. Operations are validated and applied atomically by default - all succeed or none are applied. ## Available Operations: ### Node Operations (6 types): - **addNode**: Add a new node with name, type, and position (required) - **removeNode**: Remove a node by ID or name - **updateNode**: Update node properties using dot notation (e.g., 'parameters.url') - **moveNode**: Change node position [x, y] - **enableNode**: Enable a disabled node - **disableNode**: Disable an active node ### Connection Operations (5 types): - **addConnection**: Connect nodes (source→target). Supports smart parameters: branch="true"/"false" for IF nodes, case=N for Switch nodes. - **removeConnection**: Remove connection between nodes (supports ignoreErrors flag) - **rewireConnection**: Change connection target from one node to another. Supports smart parameters. 
- **cleanStaleConnections**: Auto-remove all connections referencing non-existent nodes - **replaceConnections**: Replace entire connections object ### Metadata Operations (4 types): - **updateSettings**: Modify workflow settings - **updateName**: Rename the workflow - **addTag**: Add a workflow tag - **removeTag**: Remove a workflow tag ### Workflow Activation Operations (2 types): - **activateWorkflow**: Activate the workflow to enable automatic execution via triggers - **deactivateWorkflow**: Deactivate the workflow to prevent automatic execution ## Smart Parameters for Multi-Output Nodes For **IF nodes**, use semantic 'branch' parameter instead of technical sourceIndex: - **branch="true"**: Routes to true branch (sourceIndex=0) - **branch="false"**: Routes to false branch (sourceIndex=1) For **Switch nodes**, use semantic 'case' parameter: - **case=0**: First output - **case=1**: Second output - **case=N**: Nth output Works with addConnection and rewireConnection operations. Explicit sourceIndex overrides smart parameters. ## AI Connection Support Full support for all 8 AI connection types used in n8n AI workflows: **Connection Types**: - **ai_languageModel**: Connect language models (OpenAI, Anthropic, Google Gemini) to AI Agents - **ai_tool**: Connect tools (HTTP Request Tool, Code Tool, etc.) to AI Agents - **ai_memory**: Connect memory systems (Window Buffer, Conversation Summary) to AI Agents - **ai_outputParser**: Connect output parsers (Structured, JSON) to AI Agents - **ai_embedding**: Connect embedding models to Vector Stores - **ai_vectorStore**: Connect vector stores to Vector Store Tools - **ai_document**: Connect document loaders to Vector Stores - **ai_textSplitter**: Connect text splitters to document processing chains **AI Connection Examples**: - Single connection: \`{type: "addConnection", source: "OpenAI", target: "AI Agent", sourceOutput: "ai_languageModel"}\` - Fallback model: Use targetIndex (0=primary, 1=fallback) for dual language model setup - Multiple tools: Batch multiple \`sourceOutput: "ai_tool"\` connections to one AI Agent - Vector retrieval: Chain ai_embedding → ai_vectorStore → ai_tool → AI Agent **Important Notes**: - **AI nodes do NOT require main connections**: Nodes like OpenAI Chat Model, Postgres Chat Memory, Embeddings OpenAI, and Supabase Vector Store use AI-specific connection types exclusively. They should ONLY have connections like \`ai_languageModel\`, \`ai_memory\`, \`ai_embedding\`, or \`ai_tool\` - NOT \`main\` connections. **Best Practices**: - Always specify \`sourceOutput\` for AI connections (defaults to "main" if omitted) - Connect language model BEFORE creating/enabling AI Agent (validation requirement) - Use atomic mode (default) when setting up AI workflows to ensure complete configuration - Validate AI workflows after changes with \`n8n_validate_workflow\` tool ## Cleanup & Recovery Features ### Automatic Cleanup The **cleanStaleConnections** operation automatically removes broken connection references after node renames/deletions. Essential for workflow recovery. ### Best-Effort Mode Set **continueOnError: true** to apply valid operations even if some fail. Returns detailed results showing which operations succeeded/failed. Perfect for bulk cleanup operations. ### Graceful Error Handling Add **ignoreErrors: true** to removeConnection operations to prevent failures when connections don't exist. 
## Auto-Sanitization System ### What Gets Auto-Fixed When ANY workflow update is made, ALL nodes in the workflow are automatically sanitized to ensure complete metadata and correct structure: 1. **Operator Structure Fixes**: - Binary operators (equals, contains, greaterThan, etc.) automatically have \`singleValue\` removed - Unary operators (isEmpty, isNotEmpty, true, false) automatically get \`singleValue: true\` added - Invalid operator structures (e.g., \`{type: "isNotEmpty"}\`) are corrected to \`{type: "boolean", operation: "isNotEmpty"}\` 2. **Missing Metadata Added**: - IF nodes with conditions get complete \`conditions.options\` structure if missing - Switch nodes with conditions get complete \`conditions.options\` for all rules - Required fields: \`{version: 2, leftValue: "", caseSensitive: true, typeValidation: "strict"}\` ### Sanitization Scope - Runs on **ALL nodes** in the workflow, not just modified ones - Triggered by ANY update operation (addNode, updateNode, addConnection, etc.) - Prevents workflow corruption that would make UI unrenderable ### Limitations Auto-sanitization CANNOT fix: - Broken connections (connections referencing non-existent nodes) - use \`cleanStaleConnections\` - Branch count mismatches (e.g., Switch with 3 rules but only 2 outputs) - requires manual connection fixes - Workflows in paradoxical corrupt states (API returns corrupt data, API rejects updates) - must recreate workflow ### Recovery Guidance If validation still fails after auto-sanitization: 1. Check error details for specific issues 2. Use \`validate_workflow\` to see all validation errors 3. For connection issues, use \`cleanStaleConnections\` operation 4. For branch mismatches, add missing output connections 5. For paradoxical corrupted workflows, create new workflow and migrate nodes ## Automatic Connection Reference Updates When you rename a node using **updateNode**, all connection references throughout the workflow are automatically updated. Both the connection source keys and target references are updated for all connection types (main, error, ai_tool, ai_languageModel, ai_memory, etc.) and all branch configurations (IF node branches, Switch node cases, error outputs). ### Basic Example \`\`\`javascript // Rename a node - connections update automatically n8n_update_partial_workflow({ id: "wf_123", operations: [{ type: "updateNode", nodeId: "node_abc", updates: { name: "Data Processor" } }] }); // All incoming and outgoing connections now reference "Data Processor" \`\`\` ### Multi-Output Node Example \`\`\`javascript // Rename nodes in a branching workflow n8n_update_partial_workflow({ id: "workflow_id", operations: [ { type: "updateNode", nodeId: "if_node_id", updates: { name: "Value Checker" } }, { type: "updateNode", nodeId: "error_node_id", updates: { name: "Error Handler" } } ] }); // IF node branches and error connections automatically updated \`\`\` ### Name Collision Protection Attempting to rename a node to an existing name returns a clear error: \`\`\` Cannot rename node "Old Name" to "New Name": A node with that name already exists (id: abc123...). Please choose a different name. 
\`\`\` ### Usage Notes - Simply rename nodes with updateNode - no manual connection operations needed - Multiple renames in one call work atomically - Can rename a node and add/remove connections using the new name in the same batch - Use \`validateOnly: true\` to preview effects before applying ## Removing Properties with undefined To remove a property from a node, set its value to \`undefined\` in the updates object. This is essential when migrating from deprecated properties or cleaning up optional configuration fields. ### Why Use undefined? - **Property removal vs. null**: Setting a property to \`undefined\` removes it completely from the node object, while \`null\` sets the property to a null value - **Validation constraints**: Some properties are mutually exclusive (e.g., \`continueOnFail\` and \`onError\`). Simply setting one without removing the other will fail validation - **Deprecated property migration**: When n8n deprecates properties, you must remove the old property before the new one will work ### Basic Property Removal \`\`\`javascript // Remove error handling configuration n8n_update_partial_workflow({ id: "wf_123", operations: [{ type: "updateNode", nodeName: "HTTP Request", updates: { onError: undefined } }] }); // Remove disabled flag n8n_update_partial_workflow({ id: "wf_456", operations: [{ type: "updateNode", nodeId: "node_abc", updates: { disabled: undefined } }] }); \`\`\` ### Nested Property Removal Use dot notation to remove nested properties: \`\`\`javascript // Remove nested parameter n8n_update_partial_workflow({ id: "wf_789", operations: [{ type: "updateNode", nodeName: "API Request", updates: { "parameters.authentication": undefined } }] }); // Remove entire array property n8n_update_partial_workflow({ id: "wf_012", operations: [{ type: "updateNode", nodeName: "HTTP Request", updates: { "parameters.headers": undefined } }] }); \`\`\` ### Migrating from Deprecated Properties Common scenario: replacing \`continueOnFail\` with \`onError\`: \`\`\`javascript // WRONG: Setting only the new property leaves the old one n8n_update_partial_workflow({ id: "wf_123", operations: [{ type: "updateNode", nodeName: "HTTP Request", updates: { onError: "continueErrorOutput" } }] }); // Error: continueOnFail and onError are mutually exclusive // CORRECT: Remove the old property first n8n_update_partial_workflow({ id: "wf_123", operations: [{ type: "updateNode", nodeName: "HTTP Request", updates: { continueOnFail: undefined, onError: "continueErrorOutput" } }] }); \`\`\` ### Batch Property Removal Remove multiple properties in one operation: \`\`\`javascript n8n_update_partial_workflow({ id: "wf_345", operations: [{ type: "updateNode", nodeName: "Data Processor", updates: { continueOnFail: undefined, alwaysOutputData: undefined, "parameters.legacy_option": undefined } }] }); \`\`\` ### When to Use undefined - Removing deprecated properties during migration - Cleaning up optional configuration flags - Resolving mutual exclusivity validation errors - Removing stale or unnecessary node metadata - Simplifying node configuration`, parameters: { id: { type: 'string', required: true, description: 'Workflow ID to update' }, operations: { type: 'array', required: true, description: 'Array of diff operations. Each must have "type" field and operation-specific properties. Nodes can be referenced by ID or name.' 
}, validateOnly: { type: 'boolean', description: 'If true, only validate operations without applying them' }, continueOnError: { type: 'boolean', description: 'If true, apply valid operations even if some fail (best-effort mode). Returns applied and failed operation indices. Default: false (atomic)' }, intent: { type: 'string', description: 'Intent of the change - helps to return better response. Include in every tool call. Example: "Add error handling for API failures".' } }, returns: 'Minimal summary (id, name, active, nodeCount, operationsApplied) for token efficiency. Use n8n_get_workflow with mode "structure" to verify current state if needed. Returns validation results if validateOnly=true.', examples: [ '// Include intent parameter for better responses\nn8n_update_partial_workflow({id: "abc", intent: "Add error handling for API failures", operations: [{type: "addConnection", source: "HTTP Request", target: "Error Handler"}]})', '// Add a basic node (minimal configuration)\nn8n_update_partial_workflow({id: "abc", operations: [{type: "addNode", node: {name: "Process Data", type: "n8n-nodes-base.set", position: [400, 300], parameters: {}}}]})', '// Add node with full configuration\nn8n_update_partial_workflow({id: "def", operations: [{type: "addNode", node: {name: "Send Slack Alert", type: "n8n-nodes-base.slack", position: [600, 300], typeVersion: 2, parameters: {resource: "message", operation: "post", channel: "#alerts", text: "Success!"}}}]})', '// Add node AND connect it (common pattern)\nn8n_update_partial_workflow({id: "ghi", operations: [\n {type: "addNode", node: {name: "HTTP Request", type: "n8n-nodes-base.httpRequest", position: [400, 300], parameters: {url: "https://api.example.com", method: "GET"}}},\n {type: "addConnection", source: "Webhook", target: "HTTP Request"}\n]})', '// Rewire connection from one target to another\nn8n_update_partial_workflow({id: "xyz", operations: [{type: "rewireConnection", source: "Webhook", from: "Old Handler", to: "New Handler"}]})', '// Smart parameter: IF node true branch\nn8n_update_partial_workflow({id: "abc", operations: [{type: "addConnection", source: "IF", target: "Success Handler", branch: "true"}]})', '// Smart parameter: IF node false branch\nn8n_update_partial_workflow({id: "def", operations: [{type: "addConnection", source: "IF", target: "Error Handler", branch: "false"}]})', '// Smart parameter: Switch node case routing\nn8n_update_partial_workflow({id: "ghi", operations: [\n {type: "addConnection", source: "Switch", target: "Handler A", case: 0},\n {type: "addConnection", source: "Switch", target: "Handler B", case: 1},\n {type: "addConnection", source: "Switch", target: "Handler C", case: 2}\n]})', '// Rewire with smart parameter\nn8n_update_partial_workflow({id: "jkl", operations: [{type: "rewireConnection", source: "IF", from: "Old True Handler", to: "New True Handler", branch: "true"}]})', '// Add multiple nodes in batch\nn8n_update_partial_workflow({id: "mno", operations: [\n {type: "addNode", node: {name: "Filter", type: "n8n-nodes-base.filter", position: [400, 300], parameters: {}}},\n {type: "addNode", node: {name: "Transform", type: "n8n-nodes-base.set", position: [600, 300], parameters: {}}},\n {type: "addConnection", source: "Filter", target: "Transform"}\n]})', '// Clean up stale connections after node renames/deletions\nn8n_update_partial_workflow({id: "pqr", operations: [{type: "cleanStaleConnections"}]})', '// Remove connection gracefully (no error if it doesn\'t exist)\nn8n_update_partial_workflow({id: "stu", 
operations: [{type: "removeConnection", source: "Old Node", target: "Target", ignoreErrors: true}]})', '// Best-effort mode: apply what works, report what fails\nn8n_update_partial_workflow({id: "vwx", operations: [\n {type: "updateName", name: "Fixed Workflow"},\n {type: "removeConnection", source: "Broken", target: "Node"},\n {type: "cleanStaleConnections"}\n], continueOnError: true})', '// Update node parameter\nn8n_update_partial_workflow({id: "yza", operations: [{type: "updateNode", nodeName: "HTTP Request", updates: {"parameters.url": "https://api.example.com"}}]})', '// Validate before applying\nn8n_update_partial_workflow({id: "bcd", operations: [{type: "removeNode", nodeName: "Old Process"}], validateOnly: true})', '\n// ============ AI CONNECTION EXAMPLES ============', '// Connect language model to AI Agent\nn8n_update_partial_workflow({id: "ai1", operations: [{type: "addConnection", source: "OpenAI Chat Model", target: "AI Agent", sourceOutput: "ai_languageModel"}]})', '// Connect tool to AI Agent\nn8n_update_partial_workflow({id: "ai2", operations: [{type: "addConnection", source: "HTTP Request Tool", target: "AI Agent", sourceOutput: "ai_tool"}]})', '// Connect memory to AI Agent\nn8n_update_partial_workflow({id: "ai3", operations: [{type: "addConnection", source: "Window Buffer Memory", target: "AI Agent", sourceOutput: "ai_memory"}]})', '// Connect output parser to AI Agent\nn8n_update_partial_workflow({id: "ai4", operations: [{type: "addConnection", source: "Structured Output Parser", target: "AI Agent", sourceOutput: "ai_outputParser"}]})', '// Complete AI Agent setup: Add language model, tools, and memory\nn8n_update_partial_workflow({id: "ai5", operations: [\n {type: "addConnection", source: "OpenAI Chat Model", target: "AI Agent", sourceOutput: "ai_languageModel"},\n {type: "addConnection", source: "HTTP Request Tool", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "addConnection", source: "Code Tool", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "addConnection", source: "Window Buffer Memory", target: "AI Agent", sourceOutput: "ai_memory"}\n]})', '// Add fallback model to AI Agent for reliability\nn8n_update_partial_workflow({id: "ai6", operations: [\n {type: "addConnection", source: "OpenAI Chat Model", target: "AI Agent", sourceOutput: "ai_languageModel", targetIndex: 0},\n {type: "addConnection", source: "Anthropic Chat Model", target: "AI Agent", sourceOutput: "ai_languageModel", targetIndex: 1}\n]})', '// Vector Store setup: Connect embeddings and documents\nn8n_update_partial_workflow({id: "ai7", operations: [\n {type: "addConnection", source: "Embeddings OpenAI", target: "Pinecone Vector Store", sourceOutput: "ai_embedding"},\n {type: "addConnection", source: "Default Data Loader", target: "Pinecone Vector Store", sourceOutput: "ai_document"}\n]})', '// Connect Vector Store Tool to AI Agent (retrieval setup)\nn8n_update_partial_workflow({id: "ai8", operations: [\n {type: "addConnection", source: "Pinecone Vector Store", target: "Vector Store Tool", sourceOutput: "ai_vectorStore"},\n {type: "addConnection", source: "Vector Store Tool", target: "AI Agent", sourceOutput: "ai_tool"}\n]})', '// Rewire AI Agent to use different language model\nn8n_update_partial_workflow({id: "ai9", operations: [{type: "rewireConnection", source: "AI Agent", from: "OpenAI Chat Model", to: "Anthropic Chat Model", sourceOutput: "ai_languageModel"}]})', '// Replace all AI tools for an agent\nn8n_update_partial_workflow({id: "ai10", operations: [\n {type: 
"removeConnection", source: "Old Tool 1", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "removeConnection", source: "Old Tool 2", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "addConnection", source: "New HTTP Tool", target: "AI Agent", sourceOutput: "ai_tool"},\n {type: "addConnection", source: "New Code Tool", target: "AI Agent", sourceOutput: "ai_tool"}\n]})', '\n// ============ REMOVING PROPERTIES EXAMPLES ============', '// Remove a simple property\nn8n_update_partial_workflow({id: "rm1", operations: [{type: "updateNode", nodeName: "HTTP Request", updates: {onError: undefined}}]})', '// Migrate from deprecated continueOnFail to onError\nn8n_update_partial_workflow({id: "rm2", operations: [{type: "updateNode", nodeName: "HTTP Request", updates: {continueOnFail: undefined, onError: "continueErrorOutput"}}]})', '// Remove nested property\nn8n_update_partial_workflow({id: "rm3", operations: [{type: "updateNode", nodeName: "API Request", updates: {"parameters.authentication": undefined}}]})', '// Remove multiple properties\nn8n_update_partial_workflow({id: "rm4", operations: [{type: "updateNode", nodeName: "Data Processor", updates: {continueOnFail: undefined, alwaysOutputData: undefined, "parameters.legacy_option": undefined}}]})', '// Remove entire array property\nn8n_update_partial_workflow({id: "rm5", operations: [{type: "updateNode", nodeName: "HTTP Request", updates: {"parameters.headers": undefined}}]})' ], useCases: [ 'Rewire connections when replacing nodes', 'Route IF/Switch node outputs with semantic parameters', 'Clean up broken workflows after node renames/deletions', 'Bulk connection cleanup with best-effort mode', 'Update single node parameters', 'Replace all connections at once', 'Graceful cleanup operations that don\'t fail', 'Enable/disable nodes', 'Rename workflows or nodes', 'Manage tags efficiently', 'Connect AI components (language models, tools, memory, parsers)', 'Set up AI Agent workflows with multiple tools', 'Add fallback language models to AI Agents', 'Configure Vector Store retrieval systems', 'Swap language models in existing AI workflows', 'Batch-update AI tool connections' ], performance: 'Very fast - typically 50-200ms. Much faster than full updates as only changes are processed.', bestPractices: [ 'Always include intent parameter with specific description (e.g., "Add error handling to HTTP Request node", "Fix authentication flow", "Connect Slack notification to errors"). 
Avoid generic phrases like "update workflow" or "partial update"', 'Use rewireConnection instead of remove+add for changing targets', 'Use branch="true"/"false" for IF nodes instead of sourceIndex', 'Use case=N for Switch nodes instead of sourceIndex', 'Use cleanStaleConnections after renaming/removing nodes', 'Use continueOnError for bulk cleanup operations', 'Set ignoreErrors:true on removeConnection for graceful cleanup', 'Use validateOnly to test operations before applying', 'Group related changes in one call', 'Check operation order for dependencies', 'Use atomic mode (default) for critical updates', 'For AI connections, always specify sourceOutput (ai_languageModel, ai_tool, ai_memory, etc.)', 'Connect language model BEFORE adding AI Agent to ensure validation passes', 'Use targetIndex for fallback models (primary=0, fallback=1)', 'Batch AI component connections in a single operation for atomicity', 'Validate AI workflows after connection changes to catch configuration errors', 'To remove properties, set them to undefined (not null) in the updates object', 'When migrating from deprecated properties, remove the old property and add the new one in the same operation', 'Use undefined to resolve mutual exclusivity validation errors between properties', 'Batch multiple property removals in a single updateNode operation for efficiency' ], pitfalls: [ '**REQUIRES N8N_API_URL and N8N_API_KEY environment variables** - will not work without n8n API access', 'Atomic mode (default): all operations must succeed or none are applied', 'continueOnError breaks atomic guarantees - use with caution', 'Order matters for dependent operations (e.g., must add node before connecting to it)', 'Node references accept ID or name, but name must be unique', 'Node names with special characters (apostrophes, quotes) work correctly', 'For best compatibility, prefer node IDs over names when dealing with special characters', 'Use "updates" property for updateNode operations: {type: "updateNode", updates: {...}}', 'Smart parameters (branch, case) only work with IF and Switch nodes - ignored for other node types', 'Explicit sourceIndex overrides smart parameters (branch, case) if both provided', '**CRITICAL**: For If nodes, ALWAYS use branch="true"/"false" instead of sourceIndex. Using sourceIndex=0 for multiple connections will put them ALL on the TRUE branch (main[0]), breaking your workflow logic!', '**CRITICAL**: For Switch nodes, ALWAYS use case=N instead of sourceIndex. 
Using same sourceIndex for multiple connections will put them on the same case output.', 'cleanStaleConnections removes ALL broken connections - cannot be selective', 'replaceConnections overwrites entire connections object - all previous connections lost', '**Auto-sanitization behavior**: Binary operators (equals, contains) automatically have singleValue removed; unary operators (isEmpty, isNotEmpty) automatically get singleValue:true added', '**Auto-sanitization runs on ALL nodes**: When ANY update is made, ALL nodes in the workflow are sanitized (not just modified ones)', '**Auto-sanitization cannot fix everything**: It fixes operator structures and missing metadata, but cannot fix broken connections or branch mismatches', '**Corrupted workflows beyond repair**: Workflows in paradoxical states (API returns corrupt, API rejects updates) cannot be fixed via API - must be recreated', 'Setting a property to null does NOT remove it - use undefined instead', 'When properties are mutually exclusive (e.g., continueOnFail and onError), setting only the new property will fail - you must remove the old one with undefined', 'Removing a required property may cause validation errors - check node documentation first', 'Nested property removal with dot notation only removes the specific nested field, not the entire parent object', 'Array index notation (e.g., "parameters.headers[0]") is not supported - remove the entire array property instead' ], relatedTools: ['n8n_update_full_workflow', 'n8n_get_workflow', 'validate_workflow', 'tools_documentation'] } };
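Pulling the guidance above together, a suggested (not prescriptive) safe-update pattern is to validate first, then apply with a specific intent; the workflow ID and node names below are placeholders.

```typescript
const operations = [
  {
    type: "addNode",
    node: { name: "Error Handler", type: "n8n-nodes-base.noOp", position: [800, 500], parameters: {} }
  },
  { type: "addConnection", source: "HTTP Request", target: "Error Handler" }
];

// 1) Preview the diff without applying it.
n8n_update_partial_workflow({ id: "wf_123", operations, validateOnly: true });

// 2) Apply with a specific intent once the preview comes back clean.
n8n_update_partial_workflow({
  id: "wf_123",
  intent: "Add an error-handling branch after HTTP Request",
  operations
});
```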