run_agent_across_list

Execute AI coding agents in parallel batches to analyze, refactor, or generate code across multiple files using automated permission handling.

Instructions

Spawns an AI coding agent for each item in a previously created list. Agents run in batches of 10 parallel processes with automatic permission skipping enabled.

WHEN TO USE:

  • Performing complex code analysis, refactoring, or generation across multiple files

  • Tasks that require AI reasoning rather than simple shell commands

  • When you need to delegate work to multiple AI agents working in parallel

AVAILABLE AGENTS:

  • claude: Claude Code CLI (uses --dangerously-skip-permissions for autonomous operation)

  • gemini: Google Gemini CLI (uses --yolo for auto-accept)

  • codex: OpenAI Codex CLI (uses --dangerously-bypass-approvals-and-sandbox for autonomous operation)

  • opencode: OpenCode CLI (uses run command for non-interactive autonomous operation)
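
As a rough sketch of what these flags amount to, based on the command templates in the implementation reference below (the optional PAR5_*_ARGS environment-variable arguments are omitted here), each agent is launched with a one-shot, non-interactive command:

    // Approximate per-item command templates, mirroring getAgentCommand below;
    // "prompt" stands for the single-quote-escaped, {{item}}-expanded prompt.
    const commandTemplates: Record<string, (prompt: string) => string> = {
      claude: (prompt) =>
        `claude --dangerously-skip-permissions --output-format text --verbose -p '${prompt}'`,
      gemini: (prompt) => `gemini --yolo --output-format text '${prompt}'`,
      codex: (prompt) =>
        `codex exec --dangerously-bypass-approvals-and-sandbox '${prompt}'`,
      opencode: (prompt) => `opencode run '${prompt}'`,
    };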

HOW IT WORKS:

  1. Each item in the list is substituted into the prompt where {{item}} appears

  2. Agents run in batches of 10 at a time to avoid overwhelming the system

  3. Output streams directly to files as the agents work

  4. This tool waits for all agents to complete before returning
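
For example, 25 list items run as three waves of 10, 10, and 5 agents. A small sketch of the batching arithmetic, using the batch size of 10 stated above:

    const BATCH_SIZE = 10; // batch size described above
    const itemCount = 25;  // hypothetical list length

    const numBatches = Math.ceil(itemCount / BATCH_SIZE);            // => 3
    const lastBatchSize = itemCount - (numBatches - 1) * BATCH_SIZE; // => 5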

AFTER COMPLETION:

  • Read the stdout files to check the results from each agent

  • Check stderr files if you encounter errors

  • Files are named based on the item (e.g., "myfile.ts.stdout.txt")
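
A minimal sketch of the follow-up step, assuming the file paths reported in the tool's response (the paths below are placeholders, not real output locations):

    import { readFile } from "node:fs/promises";

    // Hypothetical paths copied from the OUTPUT FILES section of the tool's response.
    const stdoutPath = "/path/to/run/myfile.ts.stdout.txt";
    const stderrPath = "/path/to/run/myfile.ts.stderr.txt";

    const stdout = await readFile(stdoutPath, "utf8");
    if (stdout.trim().length === 0) {
      // Nothing on stdout: check stderr for errors, as recommended above.
      console.error(await readFile(stderrPath, "utf8"));
    } else {
      console.log(stdout);
    }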

VARIABLE SUBSTITUTION:

  • Use {{item}} in your prompt - it will be replaced with each list item

  • Example: "Review {{item}} for bugs" becomes "Review src/file.ts for bugs" for item "src/file.ts"

Input Schema

All three parameters are required and have no default values; an example call follows the list.

  • list_id: The list ID returned by create_list. This identifies which list of items to iterate over.

  • agent: Which AI agent to use: 'claude', 'gemini', 'codex', 'opencode'. All agents run with permission-skipping flags for autonomous operation.

  • prompt: The prompt to send to each agent. Use {{item}} as a placeholder - it will be replaced with the current item value. Example: 'Review {{item}} and suggest improvements' or 'Add error handling to {{item}}'
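
A minimal example of the arguments an MCP client would pass for this tool (the list ID shown is hypothetical and would come from a prior create_list call):

    const args = {
      list_id: "list-1a2b3c",
      agent: "claude",
      prompt: "Review {{item}} for bugs and summarize any issues you find",
    };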

Implementation Reference

  • The main handler function that retrieves the list, constructs agent commands for each item (replacing {{item}} in the prompt), spawns the selected AI agent CLI processes in parallel batches using runInBatches, streams stdout/stderr to files, and returns the file locations.

    async ({ list_id, agent, prompt }) => {
      const items = lists.get(list_id);
      if (!items) {
        return {
          content: [
            {
              type: "text",
              text: `Error: No list found with ID "${list_id}". Please call create_list first to create a list of items, then use the returned ID with this tool.`,
            },
          ],
          isError: true,
        };
      }

      // Create output directory
      const runId = randomUUID();
      const runDir = join(outputDir, runId);
      await mkdir(runDir, { recursive: true });

      const results: Array<{ item: string; files: OutputFiles }> = [];
      const tasks: Array<{
        command: string;
        stdoutFile: string;
        stderrFile: string;
      }> = [];

      // Build the agent command with skip permission flags and streaming output
      // Additional args can be passed via PAR5_AGENT_ARGS (all agents) or PAR5_CLAUDE_ARGS, PAR5_GEMINI_ARGS, PAR5_CODEX_ARGS (per-agent)
      const getAgentCommand = (
        agentName: string,
        expandedPrompt: string,
      ): string => {
        const escapedPrompt = expandedPrompt.replace(/'/g, "'\\''");
        const agentArgs = process.env.PAR5_AGENT_ARGS || "";
        switch (agentName) {
          case "claude": {
            // Claude Code CLI with --dangerously-skip-permissions and streaming output
            const claudeArgs = process.env.PAR5_CLAUDE_ARGS || "";
            return `claude --dangerously-skip-permissions --output-format text --verbose ${agentArgs} ${claudeArgs} -p '${escapedPrompt}'`;
          }
          case "gemini": {
            // Gemini CLI with yolo mode and streaming JSON output
            const geminiArgs = process.env.PAR5_GEMINI_ARGS || "";
            return `gemini --yolo --output-format text ${agentArgs} ${geminiArgs} '${escapedPrompt}'`;
          }
          case "codex": {
            // Codex CLI exec subcommand with full-auto flag and JSON streaming output
            const codexArgs = process.env.PAR5_CODEX_ARGS || "";
            return `codex exec --dangerously-bypass-approvals-and-sandbox ${agentArgs} ${codexArgs} '${escapedPrompt}'`;
          }
          case "opencode": {
            // OpenCode CLI run command for non-interactive autonomous operation
            const opencodeArgs = process.env.PAR5_OPENCODE_ARGS || "";
            return `opencode run ${agentArgs} ${opencodeArgs} '${escapedPrompt}'`;
          }
          default:
            throw new Error(`Unknown agent: ${agentName}`);
        }
      };

      for (let i = 0; i < items.length; i++) {
        const item = items[i];
        // Replace {{item}} with the actual item value
        const expandedPrompt = prompt.replace(/\{\{item\}\}/g, item);
        const safeFilename = toSafeFilename(item);
        const stdoutFile = join(runDir, `${safeFilename}.stdout.txt`);
        const stderrFile = join(runDir, `${safeFilename}.stderr.txt`);
        tasks.push({
          command: getAgentCommand(agent, expandedPrompt),
          stdoutFile,
          stderrFile,
        });
        results.push({
          item,
          files: { stdout: stdoutFile, stderr: stderrFile },
        });
      }

      // Run agents in batches of 10
      await runInBatches(tasks);

      // Build prose response
      const fileList = results
        .map(
          (r) =>
            `- ${r.item}: stdout at "${r.files.stdout}", stderr at "${r.files.stderr}"`,
        )
        .join("\n");
      const agentNames: Record<string, string> = {
        claude: "Claude Code",
        gemini: "Google Gemini",
        codex: "OpenAI Codex",
        opencode: "OpenCode",
      };
      const numBatches = Math.ceil(items.length / BATCH_SIZE);

      return {
        content: [
          {
            type: "text",
            text: `Completed ${results.length} ${agentNames[agent]} agents in ${numBatches} batch(es) of up to ${BATCH_SIZE} parallel agents each. Output has been streamed to files. OUTPUT FILES: ${fileList} NEXT STEPS: 1. Read the stdout files to check the results from each agent 2. If there are errors, check the corresponding stderr files for details All agents have completed and output files are ready to read.`,
          },
        ],
      };
    },
  );
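
    The handler also calls a toSafeFilename helper that is defined elsewhere in src/index.ts and is not reproduced on this page. A plausible sketch (an assumption, not the actual implementation) that would turn an item such as "src/file.ts" into a filesystem-safe name:

    // Hypothetical sketch of toSafeFilename: replace characters that are awkward
    // in filenames and keep the result reasonably short.
    function toSafeFilename(item: string): string {
      return item.replace(/[^A-Za-z0-9._-]+/g, "_").slice(0, 200);
    }

    // toSafeFilename("src/file.ts") === "src_file.ts"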
  • Zod input schema validating list_id (string), agent (enum of enabled agents: claude, gemini, codex, opencode), and prompt (string with {{item}} placeholder).

    inputSchema: {
      list_id: z
        .string()
        .describe(
          "The list ID returned by create_list. This identifies which list of items to iterate over.",
        ),
      agent: z
        .enum(ENABLED_AGENTS as unknown as [string, ...string[]])
        .describe(
          `Which AI agent to use: ${ENABLED_AGENTS.map((a) => `'${a}'`).join(", ")}. All agents run with permission-skipping flags for autonomous operation.`,
        ),
      prompt: z
        .string()
        .describe(
          "The prompt to send to each agent. Use {{item}} as a placeholder - it will be replaced with the current item value. Example: 'Review {{item}} and suggest improvements' or 'Add error handling to {{item}}'",
        ),
    },
  },
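
    The "as unknown as [string, ...string[]]" cast exists because z.enum expects a non-empty tuple of string literals, while ENABLED_AGENTS is a runtime-filtered array. A minimal, self-contained sketch of the same pattern (the agent list here is hypothetical):

    import { z } from "zod";

    const ENABLED_AGENTS: string[] = ["claude", "gemini"]; // hypothetical; normally derived from env vars

    // z.enum's type signature requires [string, ...string[]], so a plain string[] must be cast.
    const agentSchema = z.enum(ENABLED_AGENTS as unknown as [string, ...string[]]);

    agentSchema.parse("claude"); // passes
    // agentSchema.parse("cursor"); // would throw a ZodError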
  • src/index.ts:605-771 (registration)
    Registers the run_agent_across_list tool with McpServer, passing the detailed description (reproduced under "Instructions" above), the input schema, and the handler function shown in the entries above.
  • src/index.ts:589-772 (registration)
    Conditional registration block: the tool is only registered if at least one agent (claude, gemini, codex, opencode) is enabled, i.e. not disabled via the PAR5_DISABLE_* env vars. The block builds the per-agent documentation strings and then calls server.registerTool with the description, input schema, and handler reproduced above.

    // Tool: run_agent_across_list (only registered if at least one agent is enabled)
    if (ENABLED_AGENTS.length > 0) {
      const agentDescriptions: Record<string, string> = {
        claude:
          "claude: Claude Code CLI (uses --dangerously-skip-permissions for autonomous operation)",
        gemini: "gemini: Google Gemini CLI (uses --yolo for auto-accept)",
        codex:
          "codex: OpenAI Codex CLI (uses --dangerously-bypass-approvals-and-sandbox for autonomous operation)",
        opencode:
          "opencode: OpenCode CLI (uses run command for non-interactive autonomous operation)",
      };
      const availableAgentsDoc = ENABLED_AGENTS.map(
        (a) => `- ${agentDescriptions[a]}`,
      ).join("\n");

      server.registerTool(
        "run_agent_across_list",
        {
          description: /* the tool description reproduced under "Instructions" above */,
          inputSchema: /* the Zod input schema shown above */,
        },
        /* the handler function shown above */,
      );
    }
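
    ENABLED_AGENTS itself is defined elsewhere in src/index.ts and is not reproduced on this page. As a rough sketch of what the PAR5_DISABLE_* filtering described above could look like (an assumption, not the actual source):

    const ALL_AGENTS = ["claude", "gemini", "codex", "opencode"] as const;

    // Hypothetical: drop any agent disabled via PAR5_DISABLE_CLAUDE, PAR5_DISABLE_GEMINI, etc.
    const ENABLED_AGENTS = ALL_AGENTS.filter(
      (agent) => !process.env[`PAR5_DISABLE_${agent.toUpperCase()}`],
    );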
  • Shared helper function used by the handler to execute command tasks in configurable batches (default 10) using Promise.all for parallelism.

    // Helper to run commands in batches
    async function runInBatches(
      tasks: Array<{
        command: string;
        stdoutFile: string;
        stderrFile: string;
      }>,
    ): Promise<void> {
      for (let i = 0; i < tasks.length; i += BATCH_SIZE) {
        const batch = tasks.slice(i, i + BATCH_SIZE);
        await Promise.all(
          batch.map((task) =>
            runCommandToFiles(task.command, task.stdoutFile, task.stderrFile),
          ),
        );
      }
    }
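
    Because runInBatches awaits each Promise.all before starting the next slice, a slow agent in one batch delays the start of the following batch, which matches the sequential-batch behavior described under HOW IT WORKS. The runCommandToFiles function it calls is not reproduced on this page; a minimal sketch (an assumption, not the actual implementation) of a helper that streams a shell command's output to files:

    import { spawn } from "node:child_process";
    import { createWriteStream } from "node:fs";

    // Hypothetical sketch of runCommandToFiles: run the command through a shell,
    // stream stdout/stderr into the given files, and resolve when the process exits.
    function runCommandToFiles(
      command: string,
      stdoutFile: string,
      stderrFile: string,
    ): Promise<void> {
      return new Promise((resolve) => {
        const child = spawn(command, { shell: true });
        child.stdout.pipe(createWriteStream(stdoutFile));
        child.stderr.pipe(createWriteStream(stderrFile));
        child.on("close", () => resolve());
      });
    }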
