
run_once

Execute a workflow manually one time to trigger immediate processing and receive detailed execution results for testing or ad-hoc automation tasks.

Instructions

Execute a workflow manually once and return execution details

Input Schema

Name        Required  Description                                        Default
workflowId  Yes       ID of the workflow to run (string or number)       —
input       No        Optional input object passed to the execution      —
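As a minimal sketch of what the arguments for a `run_once` call might look like (the field values here are illustrative; only the `workflowId`/`input` shape comes from the schema above):

```typescript
// Hypothetical arguments object for the run_once tool, matching the
// input schema: workflowId is required (string or number), input is
// an optional object forwarded to the execution.
const args: { workflowId: string | number; input?: Record<string, unknown> } = {
  workflowId: 42,              // a string workflow ID would also validate
  input: { customer: "acme" }, // optional payload for the run
};

// A quick structural check mirroring the schema's constraints:
const valid =
  (typeof args.workflowId === "string" || typeof args.workflowId === "number") &&
  (args.input === undefined || typeof args.input === "object");

console.log(valid); // true
```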

Implementation Reference

  • src/index.ts:200-200 (registration)
    Tool registration in the list_tools response, including the name, description, and input schema definition.

        {
          name: 'run_once',
          description: 'Execute a workflow manually once and return execution details',
          inputSchema: {
            type: 'object',
            properties: {
              workflowId: { oneOf: [{ type: 'string' }, { type: 'number' }] },
              input: { type: 'object' },
            },
            required: ['workflowId'],
          },
        },
  • MCP tool handler: resolves the workflow ID alias, delegates execution to N8nClient.runOnce, and formats the JSON success response.

        private async handleRunOnce(args: { workflowId: string | number; input?: any }) {
          const workflowId = this.resolveWorkflowId(args.workflowId);
          const execution = await this.n8nClient.runOnce(workflowId, args.input);
          return {
            content: [{ type: 'text', text: JSON.stringify(jsonSuccess(execution), null, 2) }],
          };
        }
  • Dispatch case in the CallToolRequestHandler that routes 'run_once' tool calls to the handleRunOnce method.

        case 'run_once':
          return await this.handleRunOnce(
            request.params.arguments as { workflowId: string | number; input?: any },
          );
  • Core helper method in N8nClient: fetches the workflow, chooses an execution strategy based on trigger nodes, and calls the n8n API (/executions or /workflows/{id}/execute) to run the workflow once.

        async runOnce(workflowId: string | number, input?: any): Promise<N8nExecutionResponse> {
          try {
            const workflow = await this.getWorkflow(workflowId);
            const hasTriggerNodes = workflow.nodes.some(
              (node) =>
                node.type === 'n8n-nodes-base.webhook' ||
                node.type === 'n8n-nodes-base.cron' ||
                node.type.includes('trigger'),
            );
            if (hasTriggerNodes) {
              const executionData = { workflowData: workflow, runData: input || {} };
              const response = await this.api.post<N8nApiResponse<any>>('/executions', executionData);
              return {
                executionId: response.data.data.id || response.data.data.executionId,
                status: response.data.data.status || 'running',
              };
            } else {
              const response = await this.api.post<N8nApiResponse<any>>(
                `/workflows/${workflowId}/execute`,
                { data: input || {} },
              );
              return {
                executionId: response.data.data.id || response.data.data.executionId,
                status: response.data.data.status || 'running',
              };
            }
          } catch (error: any) {
            if (error instanceof Error && error.message.includes('404')) {
              throw new Error(`Workflow ${workflowId} not found or cannot be executed manually`);
            }
            throw error;
          }
        }
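The branch condition in runOnce can be isolated as a small predicate; a minimal sketch, with the node shape simplified to just a `type` field (the `NodeLike` interface is an assumption for illustration, not the actual n8n type):

```typescript
interface NodeLike {
  type: string;
}

// Mirrors runOnce's branch condition: the workflow takes the
// trigger-style execution path if any node is a webhook node, a cron
// node, or has "trigger" in its type name.
function hasTriggerNodes(nodes: NodeLike[]): boolean {
  return nodes.some(
    (node) =>
      node.type === "n8n-nodes-base.webhook" ||
      node.type === "n8n-nodes-base.cron" ||
      node.type.includes("trigger"),
  );
}

console.log(hasTriggerNodes([{ type: "n8n-nodes-base.webhook" }])); // true
console.log(hasTriggerNodes([{ type: "n8n-nodes-base.set" }]));     // false
```

Note that the substring check is case-sensitive as written, so it matches type names that contain the lowercase string "trigger".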
