execute_flow

Execute API testing workflows, running endpoints sequentially or in parallel with configurable variables, timeouts, and error-handling options.

Instructions

Execute a flow with sequential or parallel endpoint testing
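
For context, here is a hedged sketch of how a client might invoke this tool. It assumes an already-connected Client from the @modelcontextprotocol/sdk package; the flow ID is a placeholder, not a real value.

// Minimal sketch: calling execute_flow from an MCP client.
// Assumes `client` is an already-connected @modelcontextprotocol/sdk Client;
// 'flow-123' is a placeholder flow ID.
const result = await client.callTool({
  name: 'execute_flow',
  arguments: {
    flowId: 'flow-123',
    mode: 'sequential',
    dryRun: true // preview the planned requests without sending them
  }
});
console.log(result.content); // JSON-formatted execution summary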

Input Schema

Name           | Required | Description | Default
flowId         | Yes      | ID of the flow to execute |
variables      | No       | Variables for flow interpolation (JSON string or object, or comma-separated key=value pairs); see the example below |
mode           | No       | Execution mode (sequential or parallel) |
timeout        | No       | Flow timeout in milliseconds |
stopOnError    | No       | Stop execution on first error |
maxConcurrency | No       | Maximum concurrent steps for parallel execution |
dryRun         | No       | Run in dry-run mode (no actual HTTP requests) |
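
For illustration, the two accepted variables formats look like this; the flow ID, variable names, and values are placeholders, not values taken from a real flow.

// Illustrative execute_flow arguments with variables as a JSON string.
const argsWithJsonVariables = {
  flowId: 'flow-123',
  variables: '{"baseUrl": "https://api.example.com", "token": "abc123"}',
  mode: 'parallel',
  maxConcurrency: 3,
  timeout: 30000,     // overall flow budget in milliseconds
  stopOnError: true
};

// The same variables expressed as comma-separated key=value pairs.
const argsWithKeyValuePairs = {
  flowId: 'flow-123',
  variables: 'baseUrl=https://api.example.com, token=abc123'
};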

Input Schema (JSON Schema)

{
  "properties": {
    "dryRun": {
      "description": "Run in dry-run mode (no actual HTTP requests)",
      "type": "boolean"
    },
    "flowId": {
      "description": "ID of the flow to execute",
      "type": "string"
    },
    "maxConcurrency": {
      "description": "Maximum concurrent steps for parallel execution",
      "type": "number"
    },
    "mode": {
      "description": "Execution mode (sequential or parallel)",
      "enum": ["sequential", "parallel"],
      "type": "string"
    },
    "stopOnError": {
      "description": "Stop execution on first error",
      "type": "boolean"
    },
    "timeout": {
      "description": "Flow timeout in milliseconds",
      "type": "number"
    },
    "variables": {
      "description": "Variables for flow interpolation (JSON string or object, or comma-separated key=value pairs)",
      "type": "string"
    }
  },
  "required": ["flowId"],
  "type": "object"
}
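
For readers who prefer types to JSON Schema, the same argument shape can be written as a TypeScript interface. This is a hand-written equivalent for illustration; the project is not known to export such a type.

// Hand-written TypeScript equivalent of the input schema above (illustrative only).
interface ExecuteFlowArgs {
  flowId: string;                       // required
  variables?: string;                   // JSON string or comma-separated key=value pairs
  mode?: 'sequential' | 'parallel';
  timeout?: number;                     // milliseconds
  stopOnError?: boolean;
  maxConcurrency?: number;
  dryRun?: boolean;                     // when true, no HTTP requests are sent
}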

Implementation Reference

  • The primary handler function that implements the execute_flow tool logic: it validates input, fetches the flow details, builds the step list, merges and interpolates variables, executes the flow (sequentially, in parallel, or as a dry run), updates flow state, and returns formatted results. A sketch of the interpolateVariables helper it relies on appears after this list.
    export async function handleExecuteFlow(args: any): Promise<McpToolResponse> {
      try {
        const { flowId, variables, mode, timeout, stopOnError, maxConcurrency, dryRun } = args;

        if (!flowId) {
          return {
            content: [
              { type: 'text', text: JSON.stringify({ success: false, error: 'Flow ID is required' }, null, 2) }
            ]
          };
        }

        const instances = await getInstances();
        const flowDetails = await instances.backendClient.getFlowDetails(flowId);

        if (!flowDetails.success || !flowDetails.data) {
          return {
            content: [
              { type: 'text', text: JSON.stringify({ success: false, error: flowDetails.message || 'Failed to get flow details' }, null, 2) }
            ]
          };
        }

        const flowData = flowDetails.data;

        if (!flowData.flow_data || !flowData.flow_data.steps) {
          return {
            content: [
              { type: 'text', text: JSON.stringify({ success: false, error: 'Flow has no steps to execute' }, null, 2) }
            ]
          };
        }

        // Skip endpoints resolution for direct execution (steps have complete URLs)
        const endpoints: any[] = [];

        // Build flow steps
        const steps = flowData.flow_data.steps.map((step: any) => {
          if (step.endpointId) {
            const endpoint = endpoints.find((ep: any) => ep.id === step.endpointId);
            if (endpoint) {
              return {
                id: step.id,
                name: step.name || endpoint.name,
                method: endpoint.method,
                url: endpoint.url,
                headers: parseHeaders(endpoint.headers),
                body: parseBody(endpoint.body),
                timeout: step.timeout || flowData.flow_data.config?.timeout,
                expectedStatus: step.expectedStatus,
                description: step.description
              };
            }
          }
          return {
            id: step.id,
            name: step.name,
            method: step.method,
            url: step.url,
            headers: parseHeaders(step.headers),
            body: parseBody(step.body),
            timeout: step.timeout || flowData.flow_data.config?.timeout,
            expectedStatus: step.expectedStatus,
            description: step.description
          };
        });

        // Initialize variables
        let flowVariables: Record<string, any> = {};

        // Parse flow inputs if provided
        if (flowData.flow_inputs) {
          try {
            flowVariables = JSON.parse(flowData.flow_inputs);
          } catch (e) {
            // Try parsing as key=value pairs
            const inputVars: Record<string, string> = {};
            flowData.flow_inputs.split(',').forEach((pair: string) => {
              const [key, value] = pair.split('=').map(s => s.trim());
              if (key) {
                inputVars[key] = value || '';
              }
            });
            flowVariables = inputVars;
          }
        }

        // Merge with provided variables
        if (variables) {
          if (typeof variables === 'string') {
            try {
              const parsedVars = JSON.parse(variables);
              flowVariables = { ...flowVariables, ...parsedVars };
            } catch (e) {
              // Try parsing as key=value pairs
              const inputVars: Record<string, string> = {};
              variables.split(',').forEach((pair: string) => {
                const [key, value] = pair.split('=').map(s => s.trim());
                if (key) {
                  inputVars[key] = value || '';
                }
              });
              flowVariables = { ...flowVariables, ...inputVars };
            }
          } else {
            flowVariables = { ...flowVariables, ...variables };
          }
        }

        // Apply interpolation to variables
        for (const [key, value] of Object.entries(flowVariables)) {
          if (typeof value === 'string') {
            flowVariables[key] = interpolateVariables(value, flowVariables);
          }
        }

        // Create execution options
        const options: FlowExecutionOptions = {
          mode: mode || (flowData.flow_data.config?.parallel ? 'parallel' : 'sequential'),
          timeout: timeout || flowData.flow_data.config?.timeout,
          stopOnError: stopOnError !== undefined ? stopOnError : flowData.flow_data.config?.stopOnError,
          maxConcurrency: maxConcurrency || flowData.flow_data.config?.maxConcurrency,
          dryRun: dryRun || false
        };

        // Execute flow
        const httpClient = new HttpClient();
        const executor = new FlowExecutor(httpClient);
        const stateManager = new FlowStateManager();

        // Create flow state
        const flowState = stateManager.createFlowState(flowId, steps);
        stateManager.startFlow(flowId);

        if (dryRun) {
          // Return dry run results
          return {
            content: [
              {
                type: 'text',
                text: JSON.stringify({
                  success: true,
                  data: {
                    flowId,
                    status: 'dry_run',
                    executionTime: 0,
                    nodeResults: steps.map((step: any) => ({
                      stepId: step.id,
                      stepName: step.name,
                      success: true,
                      executionTime: 0,
                      request: {
                        method: step.method,
                        url: interpolateVariables(step.url || '', flowVariables),
                        headers: step.headers,
                        body: step.body
                      }
                    })),
                    errors: [],
                    timestamp: new Date().toISOString(),
                    dryRun: true
                  }
                }, null, 2)
              }
            ]
          };
        }

        // Execute actual flow
        const executionResult = await executor.executeFlow(steps, flowVariables, options);

        // Update flow state
        if (executionResult.success) {
          stateManager.completeFlow(flowId, true);
        } else {
          stateManager.completeFlow(flowId, false);
        }

        const result: FlowExecutionResult = {
          success: executionResult.success,
          data: {
            flowId,
            status: executionResult.success ? 'completed' : 'failed',
            executionTime: executionResult.executionTime,
            nodeResults: executionResult.results,
            errors: executionResult.errors,
            timestamp: new Date().toISOString()
          },
          message: executionResult.success ? 'Flow executed successfully' : 'Flow execution failed'
        };

        return {
          content: [
            { type: 'text', text: JSON.stringify(result, null, 2) }
          ]
        };
      } catch (error: any) {
        return {
          content: [
            { type: 'text', text: JSON.stringify({ success: false, error: error.message || 'Unknown error occurred during flow execution' }, null, 2) }
          ]
        };
      }
    }
  • Tool definition with the input schema specifying the parameters (flowId required; variables and execution options such as mode, timeout, and dryRun optional) and a reference to the handler function.
    export const executeFlowTool: McpTool = {
      name: 'execute_flow',
      description: 'Execute a flow with sequential or parallel endpoint testing',
      inputSchema: {
        type: 'object',
        properties: {
          flowId: { type: 'string', description: 'ID of the flow to execute' },
          variables: { type: 'string', description: 'Variables for flow interpolation (JSON string or object, or comma-separated key=value pairs)' },
          mode: { type: 'string', enum: ['sequential', 'parallel'], description: 'Execution mode (sequential or parallel)' },
          timeout: { type: 'number', description: 'Flow timeout in milliseconds' },
          stopOnError: { type: 'boolean', description: 'Stop execution on first error' },
          maxConcurrency: { type: 'number', description: 'Maximum concurrent steps for parallel execution' },
          dryRun: { type: 'boolean', description: 'Run in dry-run mode (no actual HTTP requests)' }
        },
        required: ['flowId']
      },
      handler: handleExecuteFlow
    };
  • Dynamic registration of the execute_flow tool handler in the main tool index, importing and delegating to the specific handler.
    'execute_flow': async (args: any) => {
      const { handleExecuteFlow } = await import('./flows/handlers/executeHandler.js');
      return handleExecuteFlow(args);
    }
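
The handler above relies on an interpolateVariables helper that is not included in this reference. A minimal sketch of such a helper, assuming {{name}}-style placeholders (the actual implementation may differ), could look like this:

// Hypothetical sketch of the interpolation helper referenced by handleExecuteFlow.
// Assumes {{name}}-style placeholders; unknown placeholders are left untouched.
function interpolateVariables(template: string, variables: Record<string, any>): string {
  return template.replace(/\{\{\s*([\w.]+)\s*\}\}/g, (match, key) => {
    const value = variables[key];
    return value !== undefined ? String(value) : match;
  });
}

// Example: interpolateVariables('{{baseUrl}}/users', { baseUrl: 'https://api.example.com' })
// returns 'https://api.example.com/users'.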
