execute_parallel_mcp_client

Execute multiple AI tasks in parallel using JSON key-value pairs, enabling efficient processing of an array of parameters with a shared base prompt for streamlined multi-agent interactions.

Instructions

Execute multiple AI tasks in parallel, with responses in JSON key-value pairs.

Input Schema

Name     Required   Description                                   Default
items    Yes        Array of parameters to process in parallel   —
prompt   Yes        The base prompt to use for all executions    —
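
Given the schema above, the tool's arguments pair one base prompt with an array of items; a sketch of a call payload (the prompt and item values here are hypothetical):

```typescript
// Hypothetical arguments for execute_parallel_mcp_client.
// Each item is appended to the base prompt and processed concurrently.
const args = {
  prompt: "Summarize the following city in one sentence:",
  items: ["Paris", "Tokyo", "Nairobi"],
};

// The server runs the three executions in parallel and returns
// { results: [...], errors: [...] } serialized as JSON text.
console.log(JSON.stringify(args, null, 2));
```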

Implementation Reference

  • The main tool handler in the CallToolRequestSchema switch statement. It extracts prompt and items from the arguments, calls executeParallel, and returns the results and errors as JSON, or an error message on failure.

        case 'execute_parallel_mcp_client': {
          const args = request.params.arguments as { prompt: string; items: string[] };
          try {
            const { results, errors } = await this.executeParallel(args.prompt, args.items);
            return {
              content: [
                {
                  type: 'text',
                  text: JSON.stringify({ results, errors }, null, 2),
                },
              ],
              isError: errors.length > 0,
            };
          } catch (error: any) {
            return {
              content: [
                {
                  type: 'text',
                  text: `Error executing parallel MCP client commands: ${error?.message || 'Unknown error'}`,
                },
              ],
              isError: true,
            };
          }
        }
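
The handler above wraps the aggregated output in a single text content block. A minimal sketch of the response envelope it builds, with hypothetical result values:

```typescript
// Sketch of the response envelope from the handler: results and errors
// are serialized together as pretty-printed JSON in one text block, and
// isError is set whenever any item produced an error.
const results: string[] = ["answer for item 1"]; // hypothetical output
const errors: string[] = [];

const response = {
  content: [
    { type: "text", text: JSON.stringify({ results, errors }, null, 2) },
  ],
  isError: errors.length > 0,
};
```

Note that partial failures still return a normal response: individual item errors land in the `errors` array while `isError` flags the call as a whole.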
  • Private method implementing the parallel execution: it splits items into chunks of maxConcurrent, pipes each `prompt item` string to the executable concurrently within a chunk, and collects stdout into results and stderr or thrown errors into errors.

        private async executeParallel(prompt: string, items: string[]): Promise<{ results: any[]; errors: string[] }> {
          const results: any[] = [];
          const errors: string[] = [];

          // Process items in chunks based on maxConcurrent
          for (let i = 0; i < items.length; i += this.maxConcurrent) {
            const chunk = items.slice(i, i + this.maxConcurrent);
            const promises = chunk.map(async (item) => {
              try {
                const { stdout, stderr } = await this.safeCommandPipe(`${prompt} ${item}`, this.executable, true);
                if (stdout) {
                  results.push(stdout);
                } else if (stderr) {
                  errors.push(`Error processing item "${item}": ${stderr}`);
                }
              } catch (error: any) {
                errors.push(`Failed to process item "${item}": ${error.message}`);
              }
            });

            // Wait for current chunk to complete before processing next chunk
            await Promise.all(promises);
          }

          return { results, errors };
        }
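
The chunking pattern above can be sketched in isolation. This is a minimal standalone version with a stubbed worker standing in for the real safeCommandPipe call (the function names here are illustrative, not part of the server's API):

```typescript
// Stub worker standing in for safeCommandPipe: in the real server this
// pipes the "prompt item" string to an external executable.
async function processItem(item: string): Promise<string> {
  return `processed:${item}`;
}

// Chunked parallel execution: each chunk of maxConcurrent items runs
// concurrently, and the next chunk starts only after the previous one
// has fully settled.
async function executeParallel(
  items: string[],
  maxConcurrent: number
): Promise<string[]> {
  const results: string[] = [];
  for (let i = 0; i < items.length; i += maxConcurrent) {
    const chunk = items.slice(i, i + maxConcurrent);
    const chunkResults = await Promise.all(chunk.map(processItem));
    results.push(...chunkResults);
  }
  return results;
}
```

One design consequence of chunking: concurrency never exceeds maxConcurrent, but workers idle until the slowest item in each chunk finishes, unlike a sliding-window pool that starts a new item as soon as any slot frees up.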
  • src/index.ts:221-241 (registration)
    Tool definition registered in ListToolsRequestSchema handler, including name, description, and input schema.
        {
          name: 'execute_parallel_mcp_client',
          description: 'Execute multiple AI tasks in parallel, with responses in JSON key-value pairs.',
          inputSchema: {
            type: 'object',
            properties: {
              prompt: {
                type: 'string',
                description: 'The base prompt to use for all executions',
              },
              items: {
                type: 'array',
                items: { type: 'string' },
                description: 'Array of parameters to process in parallel',
              },
            },
            required: ['prompt', 'items'],
          },
        },

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/tanevanwifferen/mcp-inception'
