AutomateLab-tech

automatelab-n8n-mcp

n8n_explain_execution

Diagnose failed n8n executions by pasting execution JSON to get a per-node summary of missing items, unresolved expressions, errors, and LLM token usage.

Instructions

Diagnose a failed or surprising n8n execution. Paste the execution JSON (from the n8n UI 'Show details' or GET /executions/:id?includeData=true); returns a per-node summary highlighting nodes that returned 0 items, unresolved ={{ ... }} expressions, errors with hints, and LLM token usage. Hits the most common debugging pain point: items 'silently disappearing' between nodes.

Input Schema

Name: `execution` (required, no default)
Description: n8n execution object as either a parsed object or a JSON string. Accepts both the REST API shape (`{ data: { resultData: { runData, error, lastNodeExecuted } }, finished, status, ... }`) and the raw `executionData` body returned by the UI.
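To make the accepted shape concrete, here is a minimal, illustrative payload in the REST API form described above. Node names, the error message, and item contents are invented for the example; only the structure matters:

```typescript
// Minimal illustrative execution payload in the REST API shape.
// All values here are made up; the tool only cares about the structure.
const execution = {
	status: "error",
	finished: false,
	mode: "manual",
	data: {
		resultData: {
			lastNodeExecuted: "HTTP Request",
			error: {
				message: "getaddrinfo ENOTFOUND api.example.com",
				node: "HTTP Request",
			},
			runData: {
				// One successful run with a single output item on branch 0.
				"Webhook": [{ data: { main: [[{ json: { id: 1 } }]] } }],
				// One failed run carrying a node-level error object.
				"HTTP Request": [
					{ error: { message: "getaddrinfo ENOTFOUND api.example.com" } },
				],
			},
		},
	},
};
```

The same object serialized with `JSON.stringify` is equally valid, since `execution` may also be passed as a JSON string.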

Implementation Reference

  • Main handler function for the n8n_explain_execution tool. Parses execution JSON, validates structure, iterates over runData per node, calls analyseNode() to detect errors, zero-item outputs, unresolved expressions, and LLM token usage. Returns formatted findings.
    export async function explainExecution(rawArgs: unknown) {
    	const args = inputZod.parse(rawArgs);
    	const exec =
    		typeof args.execution === "string"
    			? safeParse(args.execution)
    			: args.execution;
    
    	const findings: Finding[] = [];
    
    	if (!exec || typeof exec !== "object" || Array.isArray(exec)) {
    		findings.push({
    			severity: "error",
    			message: "Execution payload is not a JSON object.",
    		});
    		return formatResult(findings);
    	}
    
    	const root = exec as Record<string, unknown>;
    	const data = pick(root, "data") ?? root;
    	const resultData = pick(data, "resultData");
    	if (!resultData || typeof resultData !== "object") {
    		findings.push({
    			severity: "error",
    			message:
    				"Execution has no `data.resultData`. This is not a complete n8n execution payload — re-export from the n8n UI ('Show details' -> 'Copy execution data') or the REST API (`GET /executions/:id?includeData=true`).",
    		});
    		return formatResult(findings);
    	}
    	const rd = resultData as Record<string, unknown>;
    	const runData = rd.runData;
    	const lastNode =
    		typeof rd.lastNodeExecuted === "string" ? rd.lastNodeExecuted : undefined;
    	const topError = rd.error;
    
    	const status = pickString(root, "status");
    	const finished = root.finished === true;
    	const mode = pickString(root, "mode");
    
    	if (status === "running" || (!finished && status !== "error")) {
    		findings.push({
    			severity: "warning",
    			message:
    				"Execution is not finished. Wait for it to complete before diagnosing — partial run data can look like a node 'silently dropped items' when it just hasn't run yet.",
    		});
    	}
    
    	if (topError && typeof topError === "object") {
    		const e = topError as Record<string, unknown>;
    		const msg = pickString(e, "message") ?? "(no message)";
    		const node = pickString(e, "node") ?? lastNode;
    		findings.push({
    			severity: "error",
    			node,
    			message: `Workflow-level error: ${msg}`,
    			hint: hintForError(msg),
    		});
    	}
    
    	if (!runData || typeof runData !== "object") {
    		findings.push({
    			severity: "warning",
    			message:
    				"No `runData` present. The workflow probably failed before any node ran (trigger error or invalid expression in workflow settings).",
    		});
    		return formatResult(findings);
    	}
    
    	const runDataObj = runData as Record<string, unknown>;
    	const nodeNames = Object.keys(runDataObj);
    	if (nodeNames.length === 0) {
    		findings.push({
    			severity: "warning",
    			message: "`runData` is empty. No nodes executed.",
    		});
    		return formatResult(findings);
    	}
    
    	for (const nodeName of nodeNames) {
    		const runs = runDataObj[nodeName];
    		if (!Array.isArray(runs)) continue;
    		analyseNode(nodeName, runs, findings, mode);
    	}
    
    	if (lastNode && !findings.some((f) => f.node === lastNode)) {
    		findings.push({
    			severity: "info",
    			node: lastNode,
    			message: `Last node executed was "${lastNode}". If the workflow stopped here unexpectedly, check its output items below.`,
    		});
    	}
    
    	if (findings.length === 0) {
    		findings.push({
    			severity: "info",
    			message: "No problems detected. Execution finished cleanly.",
    		});
    	}
    
    	return formatResult(findings);
    }
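The handler above leans on a few small utilities — `safeParse`, `pick`, and `pickString` — whose implementations are not shown on this page. A plausible sketch, inferred only from how the handler uses them (the real implementations may differ):

```typescript
// Hypothetical sketches of the helpers used by explainExecution,
// reconstructed from their call sites; not the actual source.

// Parse JSON, returning undefined instead of throwing on bad input.
function safeParse(text: string): unknown {
	try {
		return JSON.parse(text);
	} catch {
		return undefined;
	}
}

// Read a property, but only when the value is a non-array object.
function pick(value: unknown, key: string): unknown {
	if (!value || typeof value !== "object" || Array.isArray(value)) {
		return undefined;
	}
	return (value as Record<string, unknown>)[key];
}

// Like pick, but only returns string-valued properties.
function pickString(value: unknown, key: string): string | undefined {
	const v = pick(value, key);
	return typeof v === "string" ? v : undefined;
}
```

The null-safe contracts matter here: the handler probes arbitrary user-pasted JSON, so every accessor must tolerate missing keys and wrong types without throwing.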
  • Input schema for n8n_explain_execution: expects a single 'execution' property that can be either a parsed object or a JSON string containing the n8n execution payload.
    export const explainExecutionInputSchema = {
    	type: "object",
    	properties: {
    		execution: {
    			description:
    				"n8n execution object as either a parsed object or a JSON string. Accepts both the REST API shape (`{ data: { resultData: { runData, error, lastNodeExecuted } }, finished, status, ... }`) and the raw `executionData` body returned by the UI.",
    			oneOf: [{ type: "object" }, { type: "string" }],
    		},
    	},
    	required: ["execution"],
    } as const;
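The handler validates its arguments at runtime via `inputZod.parse`, which is not shown here. A dependency-free sketch of the equivalent check — a hypothetical stand-in for the actual zod schema, enforcing the same "object or JSON string" contract:

```typescript
// Hand-rolled equivalent of the runtime validation the handler performs
// via inputZod.parse. Hypothetical: the real code uses a zod schema.
function parseArgs(raw: unknown): { execution: unknown } {
	if (!raw || typeof raw !== "object" || !("execution" in raw)) {
		throw new Error("`execution` is required");
	}
	const execution = (raw as { execution: unknown }).execution;
	if (
		typeof execution !== "string" &&
		(typeof execution !== "object" || execution === null)
	) {
		throw new Error("`execution` must be an object or a JSON string");
	}
	return { execution };
}
```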
  • analyseNode helper function that inspects each node's run data for errors, zero-item outputs, unresolved expressions, and AI/LLM token usage.
    function analyseNode(
    	nodeName: string,
    	runs: unknown[],
    	findings: Finding[],
    	mode: string | undefined,
    ) {
    	for (let runIdx = 0; runIdx < runs.length; runIdx++) {
    		const run = runs[runIdx];
    		if (!run || typeof run !== "object") continue;
    		const r = run as Record<string, unknown>;
    
    		if (r.error && typeof r.error === "object") {
    			const e = r.error as Record<string, unknown>;
    			const message = pickString(e, "message") ?? "(no message)";
    			const description = pickString(e, "description");
    			findings.push({
    				severity: "error",
    				node: nodeName,
    				message: description ? `${message} - ${description}` : message,
    				hint: hintForError(message),
    			});
    			continue;
    		}
    
    		const nodeData = r.data;
    		const main = pickArray(nodeData, "main");
    		const ai = collectAiOutputs(nodeData);
    
    		if (!main && ai.length === 0) {
    			findings.push({
    				severity: "warning",
    				node: nodeName,
    				message:
    					"Ran but produced no output. Likely a no-op or upstream gave it nothing to iterate on.",
    			});
    			continue;
    		}
    
    		if (main && main.length > 0) {
    			let totalItems = 0;
    			for (const branch of main) {
    				if (Array.isArray(branch)) totalItems += branch.length;
    			}
    			if (totalItems === 0) {
    				findings.push({
    					severity: "warning",
    					node: nodeName,
    					message:
    						runs.length > 1
    							? `Run #${runIdx + 1} returned 0 items.`
    							: "Returned 0 items. Downstream nodes will not execute.",
    					hint:
    						"Common causes: (1) IF/Switch routed to the other branch — check `parameters.conditions`. (2) Filter/Set node dropped everything — inspect its output explicitly. (3) Code node returned `[]` or `null` instead of an array of `{ json: ... }` objects.",
    				});
    			}
    
    			const firstItem = firstJson(main);
    			if (firstItem && hasUnresolvedExpression(firstItem)) {
    				findings.push({
    					severity: "warning",
    					node: nodeName,
    					message:
    						"Output contains an unresolved `={{ ... }}` expression. n8n stored the literal expression instead of evaluating it.",
    					hint:
    						"Almost always: (1) referenced node hadn't run yet for this item — fix the workflow order. (2) `$json.foo` accessed when `foo` was undefined — pre-check with `$json.foo ?? 'fallback'`. (3) typo in `$('Node Name')` — node names are case-sensitive.",
    				});
    			}
    		}
    
    		for (const aiOut of ai) {
    			const tokens = extractTokens(aiOut);
    			if (tokens) {
    				findings.push({
    					severity: "info",
    					node: nodeName,
    					message: `LLM call: ${tokens.input ?? "?"} input + ${
    						tokens.output ?? "?"
    					} output tokens${tokens.model ? ` (${tokens.model})` : ""}.`,
    				});
    			}
    		}
    	}
    
    	if (mode === "manual" && runs.length === 0) {
    		findings.push({
    			severity: "warning",
    			node: nodeName,
    			message:
    				"Node is in `runData` but has no runs. n8n usually prunes these — check whether you're looking at a partial test run.",
    		});
    	}
    }
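Among the helpers `analyseNode` calls, `hasUnresolvedExpression` carries the most interesting logic. A minimal sketch of how it might work — assuming it recursively walks an item's `json` payload looking for literal n8n expression markers that were stored instead of evaluated (the real implementation may differ):

```typescript
// Hypothetical sketch of hasUnresolvedExpression: recursively scan a
// value for strings that still contain a literal n8n expression
// ("={{ ... }}" or "{{ ... }}") rather than an evaluated result.
function hasUnresolvedExpression(value: unknown): boolean {
	if (typeof value === "string") {
		return value.includes("={{") || /\{\{[^}]*\}\}/.test(value);
	}
	if (Array.isArray(value)) {
		return value.some(hasUnresolvedExpression);
	}
	if (value && typeof value === "object") {
		return Object.values(value).some(hasUnresolvedExpression);
	}
	return false;
}
```

Note the handler only checks the first output item (`firstJson(main)`), a pragmatic trade-off: unresolved expressions usually affect every item the same way, so scanning one is cheap and representative.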
  • src/index.ts:58-63 (registration)
    Tool registration in the MCP server's tool list: defines the tool as 'n8n_explain_execution' with description and inputSchema.
    {
    	name: "n8n_explain_execution",
    	description:
    		"Diagnose a failed or surprising n8n execution. Paste the execution JSON (from the n8n UI 'Show details' or `GET /executions/:id?includeData=true`); returns a per-node summary highlighting nodes that returned 0 items, unresolved `={{ ... }}` expressions, errors with hints, and LLM token usage. Hits the most common debugging pain point: items 'silently disappearing' between nodes.",
    	inputSchema: explainExecutionInputSchema,
    },
  • src/index.ts:117-118 (registration)
    Handler dispatch: routes the 'n8n_explain_execution' tool call to the explainExecution function in the switch statement.
    case "n8n_explain_execution":
    	return explainExecution(args ?? {});
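Putting registration and dispatch together, an MCP client invokes the tool with a standard `tools/call` request. An illustrative request body — the embedded execution value is a made-up example, passed here as a JSON string, which the handler also accepts:

```typescript
// Illustrative MCP tools/call request an agent might send to this server.
// The execution payload below is invented for the example.
const callRequest = {
	method: "tools/call",
	params: {
		name: "n8n_explain_execution",
		arguments: {
			execution: JSON.stringify({
				data: { resultData: { runData: {} } },
				finished: true,
				status: "success",
			}),
		},
	},
};
```

With this empty `runData`, the handler would report that no nodes executed, exercising the early-return path shown in the listing above.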
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Even though the tool declares no structured annotations, the description fully discloses behavior: it returns a per-node summary highlighting nodes that returned 0 items, unresolved expressions, errors with hints, and LLM token usage. It also mentions the common pain point of items 'silently disappearing', which adds valuable context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is two sentences, front-loaded with the core purpose, and every sentence adds value. It is efficient and structured well for quick understanding.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no output schema, the description adequately explains what the return contains. The parameter is thoroughly described. The tool's complexity is moderate, and the description covers the necessary context for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The single parameter 'execution' has 100% schema coverage. The description adds significant meaning beyond the schema by clarifying acceptable formats (parsed object or JSON string) and providing specific shape examples (REST API and UI body). This removes ambiguity.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Diagnose a failed or surprising n8n execution.' It specifies the input (execution JSON) and the output (per-node summary highlighting common issues). This distinguishes it from sibling tools like n8n_list_executions, which only list executions.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly says when to use the tool ('failed or surprising execution') and provides detailed guidance on input format, including the two accepted shapes (REST API and UI). It does not explicitly state when not to use it, but the context is clear enough for an agent to infer.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
