execute_parallel_mcp_client

Execute multiple AI tasks simultaneously to process arrays of parameters in parallel, returning structured JSON responses for efficient multi-agent interactions.

Instructions

Execute multiple AI tasks in parallel, with responses in JSON key-value pairs.

Input Schema

Name    Required  Description                                 Default
prompt  Yes       The base prompt to use for all executions   —
items   Yes       Array of parameters to process in parallel  —
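A call to this tool passes a single base prompt plus an array of per-item parameters. The following sketch shows what the arguments object might look like; the prompt and items are illustrative, not values from the source:

```typescript
// Illustrative arguments for execute_parallel_mcp_client.
// The prompt text and items below are examples only.
const args = {
  prompt: 'Summarize the following city in one sentence:',
  items: ['Paris', 'Tokyo', 'Nairobi'],
};

console.log(JSON.stringify(args, null, 2));
```

Each entry in `items` is appended to `prompt` to form one independent execution.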

Implementation Reference

  • Dispatch handler for the 'execute_parallel_mcp_client' tool call. Parses input arguments, invokes the parallel execution method, formats results/errors as JSON in MCP response format, and handles exceptions.
    case 'execute_parallel_mcp_client': {
      const args = request.params.arguments as { prompt: string; items: string[] };
      try {
        const { results, errors } = await this.executeParallel(args.prompt, args.items);
        return {
          content: [
            {
              type: 'text',
              text: JSON.stringify({ results, errors }, null, 2),
            },
          ],
          isError: errors.length > 0,
        };
      } catch (error: any) {
        return {
          content: [
            {
              type: 'text',
              text: `Error executing parallel MCP client commands: ${error?.message || 'Unknown error'}`,
            },
          ],
          isError: true,
        };
      }
    }
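On success, the handler above returns the aggregated results and errors serialized into a single text content item. A sketch of the resulting MCP response shape, with illustrative values:

```typescript
// Sketch of the MCP tool response produced by the dispatch handler.
// The result string is illustrative, not actual tool output.
const response = {
  content: [
    {
      type: 'text',
      text: JSON.stringify({ results: ['output for item 1'], errors: [] }, null, 2),
    },
  ],
  isError: false, // true whenever the errors array is non-empty
};

console.log(response.content[0].text);
```

Callers can recover the per-item outputs by parsing the `text` field back into `{ results, errors }`.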
  • Core implementation of parallel execution: processes items in configurable concurrent chunks, executes MCP client commands via safeCommandPipe for each item-prompt pair, collects stdout as results and stderr/exceptions as errors.
    private async executeParallel(prompt: string, items: string[]): Promise<{ results: any[]; errors: string[] }> {
      const results: any[] = [];
      const errors: string[] = [];
      // Process items in chunks based on maxConcurrent
      for (let i = 0; i < items.length; i += this.maxConcurrent) {
        const chunk = items.slice(i, i + this.maxConcurrent);
        const promises = chunk.map(async (item) => {
          try {
            const { stdout, stderr } = await this.safeCommandPipe(`${prompt} ${item}`, this.executable, true);
            if (stdout) {
              results.push(stdout);
            } else if (stderr) {
              errors.push(`Error processing item "${item}": ${stderr}`);
            }
          } catch (error: any) {
            errors.push(`Failed to process item "${item}": ${error.message}`);
          }
        });
        // Wait for current chunk to complete before processing next chunk
        await Promise.all(promises);
      }
      return { results, errors };
    }
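The chunking pattern above can be sketched in isolation. This is a minimal, self-contained version that assumes a generic async `worker` function in place of `safeCommandPipe`; names and types here are illustrative, not from the source:

```typescript
// Minimal sketch of chunked parallel execution: process items in
// groups of at most `maxConcurrent`, waiting for each group to
// finish before starting the next.
async function executeParallel<T, R>(
  items: T[],
  maxConcurrent: number,
  worker: (item: T) => Promise<R>, // stands in for safeCommandPipe
): Promise<{ results: R[]; errors: string[] }> {
  const results: R[] = [];
  const errors: string[] = [];
  for (let i = 0; i < items.length; i += maxConcurrent) {
    const chunk = items.slice(i, i + maxConcurrent);
    // All items in a chunk run concurrently; chunks run sequentially.
    await Promise.all(
      chunk.map(async (item) => {
        try {
          results.push(await worker(item));
        } catch (err: any) {
          errors.push(`Failed to process item "${item}": ${err.message}`);
        }
      }),
    );
  }
  return { results, errors };
}
```

Because each chunk completes before the next begins, at most `maxConcurrent` subprocesses are ever in flight, bounding resource usage at the cost of some idle time when a chunk's slowest item lags the rest.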
  • src/index.ts:221-241 (registration)
    Registers the 'execute_parallel_mcp_client' tool with the MCP server in the ListTools response, defining its name, description, and input schema.
    {
      name: 'execute_parallel_mcp_client',
      description: 'Execute multiple AI tasks in parallel, with responses in JSON key-value pairs.',
      inputSchema: {
        type: 'object',
        properties: {
          prompt: {
            type: 'string',
            description: 'The base prompt to use for all executions',
          },
          items: {
            type: 'array',
            items: { type: 'string' },
            description: 'Array of parameters to process in parallel',
          },
        },
        required: ['prompt', 'items'],
      },
    },
  • Input schema defining the expected arguments: prompt (string) and items (array of strings).
    inputSchema: {
      type: 'object',
      properties: {
        prompt: {
          type: 'string',
          description: 'The base prompt to use for all executions',
        },
        items: {
          type: 'array',
          items: { type: 'string' },
          description: 'Array of parameters to process in parallel',
        },
      },
      required: ['prompt', 'items'],
    },

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/tanevanwifferen/mcp-inception'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.