
analyze_n8n_workflow

Analyze n8n workflow JSON to identify efficiency issues, improve error handling, and apply best practices for optimization.

Instructions

Analyze and optimize n8n workflow JSON for efficiency, error handling, and best practices

WORKFLOW: Well suited to understanding complex workflows, identifying issues, and assessing technical debt.
TIP: Use Desktop Commander to read files, then pass the content here for analysis.
SAVES: Claude context for strategic decisions.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| `analysisDepth` | No | Level of analysis detail | `detailed` |
| `analysisType` | No | Type of analysis to perform | `comprehensive` |
| `code` | No | The code to analyze (for single-file analysis) | |
| `filePath` | No | Path to single file to analyze | |
| `files` | No | Array of specific file paths (for multi-file analysis) | |
| `includeCredentialCheck` | No | Check for exposed credentials | `true` |
| `language` | No | Programming language | `javascript` |
| `maxDepth` | No | Maximum directory depth for multi-file discovery (1-5) | `3` |
| `optimizationFocus` | No | Primary optimization focus | `all` |
| `projectPath` | No | Path to project root (for multi-file analysis) | |
| `suggestAlternativeNodes` | No | Suggest alternative node configurations | `true` |
| `workflow` | No | n8n workflow JSON object | |
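Only one input group needs to be supplied per call; for workflow analysis the n8n JSON is passed inline via `workflow`. A minimal hypothetical call payload (the workflow content, node names, and IDs are invented for illustration):

```typescript
// Hypothetical arguments for analyze_n8n_workflow; field names mirror the schema above.
const args = {
  workflow: {
    name: "Fetch and notify",
    nodes: [
      { id: "1", name: "Webhook", type: "n8n-nodes-base.webhook", parameters: {} },
      { id: "2", name: "HTTP Request", type: "n8n-nodes-base.httpRequest", parameters: {} },
    ],
    connections: {
      Webhook: { main: [[{ node: "HTTP Request", type: "main", index: 0 }]] },
    },
  },
  optimizationFocus: "performance",
  analysisDepth: "detailed",
};

console.log(args.workflow.nodes.length); // 2
```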

Implementation Reference

  • Tool registration: class definition with name, category, and description.
```typescript
export class N8nWorkflowAnalyzer extends BasePlugin implements IPromptPlugin {
  name = 'analyze_n8n_workflow';
  category = 'analyze' as const;
  description = 'Analyze and optimize n8n workflow JSON for efficiency, error handling, and best practices';
```
  • Input parameters schema definition for the analyze_n8n_workflow tool.
```typescript
parameters = {
  // Single-file parameters
  code: {
    type: 'string' as const,
    description: 'The code to analyze (for single-file analysis)',
    required: false
  },
  filePath: {
    type: 'string' as const,
    description: 'Path to single file to analyze',
    required: false
  },

  // Multi-file parameters
  projectPath: {
    type: 'string' as const,
    description: 'Path to project root (for multi-file analysis)',
    required: false
  },
  files: {
    type: 'array' as const,
    description: 'Array of specific file paths (for multi-file analysis)',
    required: false,
    items: { type: 'string' as const }
  },
  maxDepth: {
    type: 'number' as const,
    description: 'Maximum directory depth for multi-file discovery (1-5)',
    required: false,
    default: 3
  },

  // n8n-specific parameters
  workflow: {
    type: 'object' as const,
    description: 'n8n workflow JSON object',
    required: false
  },
  optimizationFocus: {
    type: 'string' as const,
    description: 'Primary optimization focus',
    enum: ['performance', 'error-handling', 'maintainability', 'all'],
    default: 'all',
    required: false
  },
  includeCredentialCheck: {
    type: 'boolean' as const,
    description: 'Check for exposed credentials',
    default: true,
    required: false
  },
  suggestAlternativeNodes: {
    type: 'boolean' as const,
    description: 'Suggest alternative node configurations',
    default: true,
    required: false
  },

  // Universal parameters
  language: {
    type: 'string' as const,
    description: 'Programming language',
    required: false,
    default: 'javascript'
  },
  analysisDepth: {
    type: 'string' as const,
    description: 'Level of analysis detail',
    enum: ['basic', 'detailed', 'comprehensive'],
    default: 'detailed',
    required: false
  },
  analysisType: {
    type: 'string' as const,
    description: 'Type of analysis to perform',
    enum: ['workflow', 'security', 'comprehensive'],
    default: 'comprehensive',
    required: false
  }
};
```
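Defaults in the schema above (e.g. `maxDepth: 3`, `analysisDepth: 'detailed'`) apply only when the caller omits a value. A minimal sketch of how such schema defaults can be merged into incoming parameters; the `applyDefaults` helper is hypothetical and not part of the plugin source shown here:

```typescript
// Sketch: merge schema defaults into caller-supplied params (helper name is hypothetical).
type ParamSpec = { default?: any; required: boolean };

function applyDefaults(
  schema: Record<string, ParamSpec>,
  params: Record<string, any>
): Record<string, any> {
  const out = { ...params };
  for (const [key, spec] of Object.entries(schema)) {
    // Explicit values win; only fill in a default when the key is absent.
    if (out[key] === undefined && spec.default !== undefined) out[key] = spec.default;
  }
  return out;
}

const merged = applyDefaults(
  {
    analysisDepth: { default: "detailed", required: false },
    maxDepth: { default: 3, required: false },
  },
  { maxDepth: 2 }
);
console.log(merged.analysisDepth, merged.maxDepth); // detailed 2
```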
  • Core handler: execute method orchestrates security checks, parameter validation, model setup, analysis mode detection, and routes to single-file or multi-file analysis implementations.
```typescript
async execute(params: any, llmClient: any) {
  return await withSecurity(this, params, llmClient, async (secureParams) => {
    try {
      // 1. Auto-detect analysis mode based on parameters
      const analysisMode = this.detectAnalysisMode(secureParams);

      // 2. Validate parameters based on detected mode
      this.validateParameters(secureParams, analysisMode);

      // 3. Setup model
      const { model, contextLength } = await ModelSetup.getReadyModel(llmClient);

      // 4. Route to appropriate analysis method
      if (analysisMode === 'single-file') {
        return await this.executeSingleFileAnalysis(secureParams, model, contextLength);
      } else {
        return await this.executeMultiFileAnalysis(secureParams, model, contextLength);
      }
    } catch (error: any) {
      return ErrorHandler.createExecutionError('analyze_n8n_workflow', error);
    }
  });
}
```
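The reference does not show `detectAnalysisMode`. Based on the parameter split in the schema (inline `workflow`/`code`/`filePath` versus `projectPath`/`files`), it plausibly works along these lines; this is a sketch, not the plugin's actual code:

```typescript
// Hypothetical sketch of detectAnalysisMode, inferred from the parameter groups above.
type Mode = "single-file" | "multi-file";

function detectAnalysisMode(params: {
  workflow?: object;
  code?: string;
  filePath?: string;
  projectPath?: string;
  files?: string[];
}): Mode {
  // An inline workflow object, inline code, or a single file path implies single-file analysis.
  if (params.workflow || params.code || params.filePath) return "single-file";
  // Otherwise a project root or explicit file list implies multi-file analysis.
  return "multi-file";
}

console.log(detectAnalysisMode({ workflow: { nodes: [] } })); // single-file
console.log(detectAnalysisMode({ projectPath: "/repo" }));    // multi-file
```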
  • Helper: Generates prompt stages for single-file n8n workflow analysis, including system context, data payload, and detailed output instructions.
```typescript
private getSingleFilePromptStages(params: any): PromptStages {
  const { workflow, optimizationFocus, includeCredentialCheck, suggestAlternativeNodes, analysisDepth } = params;

  const systemAndContext = `You are an expert n8n workflow optimization specialist with extensive experience in automation, API integration, and workflow efficiency.

Analysis Context:
- Optimization Focus: ${optimizationFocus}
- Analysis Depth: ${analysisDepth}
- Include Credential Check: ${includeCredentialCheck}
- Suggest Alternative Nodes: ${suggestAlternativeNodes}
- Mode: Single Workflow Analysis

Your expertise covers:
- n8n node configurations and best practices
- API optimization and rate limiting strategies
- Error handling and workflow resilience
- Performance optimization and parallel processing
- Security assessment for automation workflows
- Workflow maintainability and organization

Your task is to provide comprehensive analysis and actionable optimization recommendations for this n8n workflow.`;

  const dataPayload = `n8n Workflow to Analyze:
\`\`\`json
${JSON.stringify(workflow, null, 2)}
\`\`\``;

  const outputInstructions = `Analyze this n8n workflow and provide a comprehensive optimization report in the following structured format:

## Workflow Analysis Summary
- Overall complexity assessment (node count: ${workflow?.nodes?.length || 0})
- Flow efficiency rating
- Primary optimization opportunities identified

## Detailed Analysis

### 1. Efficiency Issues
- Redundant nodes or operations
- Duplicate API calls that could be consolidated
- Unnecessary data transformations
- Node consolidation opportunities

### 2. Error Handling Review
- Missing error handling (Error Trigger nodes)
- Proper try-catch pattern implementation
- Retry configurations and strategies
- Error notification setup

### 3. Performance Optimization
- Bottlenecks and synchronous operations
- Parallel processing opportunities
- API rate limiting considerations
- Memory usage with large datasets
${includeCredentialCheck ? `
### 4. Security Assessment
- Exposed credentials or API keys in node configurations
- Sensitive data in logs or outputs
- Webhook authentication security
- Input sanitization validation
` : ''}
### 5. Maintainability Improvements
- Node naming conventions and clarity
- Workflow organization and logical grouping
- Sub-workflow opportunities for reusability
- Documentation completeness
${suggestAlternativeNodes ? `
### 6. Alternative Node Suggestions
- More efficient node alternatives for current setup
- Built-in vs custom code node recommendations
- Community node suggestions for better functionality
- Simpler implementation approaches
` : ''}
## Implementation Recommendations
1. **Priority Changes** (high impact, low effort first)
2. **Performance Improvements** with expected metrics
3. **Step-by-step Implementation Guide**
4. **Risk Assessment** for proposed changes

## Optimized Workflow Structure
Provide specific suggestions for structural improvements to the workflow design.

${this.getOptimizationFocusInstructions(optimizationFocus)}

**Important**: Reference specific node names and IDs from the workflow. Provide actionable recommendations with clear business impact and implementation steps.`;

  return { systemAndContext, dataPayload, outputInstructions };
}
```
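`getOptimizationFocusInstructions` is interpolated into the prompt above but not shown in this reference. A hypothetical sketch of such a mapping from the `optimizationFocus` enum values to extra prompt text (the returned strings are illustrative, not the plugin's actual wording):

```typescript
// Hypothetical sketch: map the optimizationFocus enum to a focus directive for the prompt.
function getOptimizationFocusInstructions(focus: string): string {
  switch (focus) {
    case "performance":
      return "**Focus**: Prioritize execution speed, parallelism, and API efficiency.";
    case "error-handling":
      return "**Focus**: Prioritize resilience, retry strategies, and error notification coverage.";
    case "maintainability":
      return "**Focus**: Prioritize node naming, workflow organization, and sub-workflow reuse.";
    default: // "all"
      return "**Focus**: Balance performance, error handling, and maintainability equally.";
  }
}

console.log(getOptimizationFocusInstructions("performance"));
```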
  • Output response schema: AnalyzeN8nWorkflowResponse interface defining the structure of the tool's response.
```typescript
export interface AnalyzeN8nWorkflowResponse extends BaseResponse {
  data: {
    summary: {
      nodeCount: number;
      connectionCount: number;
      complexity: "simple" | "moderate" | "complex";
      estimatedExecutionTime?: string;
      hasErrorHandling: boolean;
      hasCredentialIssues: boolean;
    };
    issues: Array<{
      nodeId: string;
      nodeName: string;
      type: "error" | "warning" | "suggestion";
      category: "performance" | "error-handling" | "security" | "structure";
      message: string;
      fix?: string;
    }>;
    optimizations: Array<{
      type: "merge-nodes" | "parallel-execution" | "caching" | "batch-processing";
      description: string;
      nodes: string[];
      estimatedImprovement?: string;
    }>;
    alternativeNodes?: Array<{
      currentNode: string;
      suggestedNode: string;
      reason: string;
    }>;
    credentials?: {
      exposed: boolean;
      issues: string[];
    };
  };
}
```
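An illustrative value matching the `data` member of this interface; all node names, messages, and counts are invented for the example:

```typescript
// Example payload shaped like AnalyzeN8nWorkflowResponse['data'] (contents are illustrative).
const data = {
  summary: {
    nodeCount: 5,
    connectionCount: 4,
    complexity: "moderate" as const,
    hasErrorHandling: false,
    hasCredentialIssues: false,
  },
  issues: [
    {
      nodeId: "2",
      nodeName: "HTTP Request",
      type: "warning" as const,
      category: "error-handling" as const,
      message: "No retry configuration; a transient API failure stops the workflow.",
      fix: "Enable 'Retry On Fail' with a short backoff.",
    },
  ],
  optimizations: [
    {
      type: "batch-processing" as const,
      description: "Consolidate per-item HTTP calls into one batched request.",
      nodes: ["2"],
    },
  ],
};

console.log(data.issues.length, data.summary.complexity); // 1 moderate
```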

