
run_shell_across_list

Execute a shell command for every item in a list, substituting $item into the command and streaming each run's stdout and stderr to separate files, with up to 10 commands running in parallel per batch.

Instructions

Executes a shell command for each item in a previously created list. Commands run in batches of 10 parallel processes, with stdout and stderr streamed to separate files.

WHEN TO USE:

  • Running the same shell command across multiple files (e.g., linting, formatting, compiling)

  • Batch processing with command-line tools

  • Any operation where you need to execute shell commands on a collection of items

HOW IT WORKS:

  1. Each item in the list is substituted into the command where $item appears

  2. Commands run in batches of 10 at a time to avoid overwhelming the system

  3. Output streams directly to files as the commands execute

  4. This tool waits for all commands to complete before returning

AFTER COMPLETION:

  • Read the stdout files to check results

  • Check stderr files if you encounter errors or unexpected output

  • Files are named based on the item (e.g., "myfile.ts.stdout.txt")
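
The toSafeFilename helper that produces these names is not shown on this page; a plausible sketch (the exact replacement rule is an assumption) is:

```typescript
// Hypothetical sketch of toSafeFilename: the actual rule used by the tool
// is not shown here. This version replaces any character that is unsafe in
// a filename (such as path separators) with an underscore.
function toSafeFilename(item: string): string {
  return item.replace(/[^A-Za-z0-9._-]/g, "_");
}

console.log(toSafeFilename("src/file.ts") + ".stdout.txt");
// prints: src_file.ts.stdout.txt
```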

VARIABLE SUBSTITUTION:

  • Use $item in your command - it will be replaced with each list item (properly shell-escaped)

  • Example: "cat $item" becomes "cat 'src/file.ts'" for item "src/file.ts"
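
The substitution rule above can be sketched as follows (expandCommand is an illustrative name; the escaping mirrors the implementation shown later on this page):

```typescript
// Sketch of the $item substitution. Embedded single quotes are escaped
// as '\'' (close quote, escaped quote, reopen quote), then the whole item
// is wrapped in single quotes before replacing every $item occurrence.
function expandCommand(command: string, item: string): string {
  const escaped = item.replace(/'/g, "'\\''");
  return command.replace(/\$item/g, `'${escaped}'`);
}

console.log(expandCommand("cat $item", "src/file.ts"));
// prints: cat 'src/file.ts'
```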

Input Schema

  • list_id (string, required): The list ID returned by create_list. This identifies which list of items to iterate over.

  • command (string, required): Shell command to execute for each item. Use $item as a placeholder; it will be replaced with the current item value (properly escaped). Example: 'wc -l $item' or 'cat $item | grep TODO'
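
A hypothetical invocation might pass arguments like the following (the list_id value is a placeholder; use the ID actually returned by create_list):

```typescript
// Illustrative arguments for run_shell_across_list.
const args = {
  list_id: "list-1234", // hypothetical ID from create_list
  command: "wc -l $item",
};

console.log(JSON.stringify(args));
```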

Implementation Reference

  • The handler function for 'run_shell_across_list' retrieves the list, substitutes $item in the command for each item, sets up output files, runs the commands in batches using runInBatches, and returns a summary with file locations. The full handler is included verbatim in the registration snippet below.
  • Zod input schema defining the required parameters: list_id (string) and command (string). The schema is included verbatim in the registration snippet below.
  • src/index.ts:463-580 (registration)
    Registration of the 'run_shell_across_list' tool using server.registerTool, including description, inputSchema, and handler function.
    // Tool: run_shell_across_list
    server.registerTool(
    	"run_shell_across_list",
    	{
    		description: `Executes a shell command for each item in a previously created list. Commands run in batches of ${BATCH_SIZE} parallel processes, with stdout and stderr streamed to separate files.
    
    WHEN TO USE:
    - Running the same shell command across multiple files (e.g., linting, formatting, compiling)
    - Batch processing with command-line tools
    - Any operation where you need to execute shell commands on a collection of items
    
    HOW IT WORKS:
    1. Each item in the list is substituted into the command where $item appears
    2. Commands run in batches of ${BATCH_SIZE} at a time to avoid overwhelming the system
    3. Output streams directly to files as the commands execute
    4. This tool waits for all commands to complete before returning
    
    AFTER COMPLETION:
    - Read the stdout files to check results
    - Check stderr files if you encounter errors or unexpected output
    - Files are named based on the item (e.g., "myfile.ts.stdout.txt")
    
    VARIABLE SUBSTITUTION:
    - Use $item in your command - it will be replaced with each list item (properly shell-escaped)
    - Example: "cat $item" becomes "cat 'src/file.ts'" for item "src/file.ts"`,
    		inputSchema: {
    			list_id: z
    				.string()
    				.describe(
    					"The list ID returned by create_list. This identifies which list of items to iterate over.",
    				),
    			command: z
    				.string()
    				.describe(
    					"Shell command to execute for each item. Use $item as a placeholder - it will be replaced with the current item value (properly escaped). Example: 'wc -l $item' or 'cat $item | grep TODO'",
    				),
    		},
    	},
    	async ({ list_id, command }) => {
    		const items = lists.get(list_id);
    		if (!items) {
    			return {
    				content: [
    					{
    						type: "text",
    						text: `Error: No list found with ID "${list_id}". Please call create_list first to create a list of items, then use the returned ID with this tool.`,
    					},
    				],
    				isError: true,
    			};
    		}
    
    		// Create output directory
    		const runId = randomUUID();
    		const runDir = join(outputDir, runId);
    		await mkdir(runDir, { recursive: true });
    
    		const results: Array<{ item: string; files: OutputFiles }> = [];
    		const tasks: Array<{
    			command: string;
    			stdoutFile: string;
    			stderrFile: string;
    		}> = [];
    
    		for (let i = 0; i < items.length; i++) {
    			const item = items[i];
    			// Replace $item with the actual item value (properly escaped)
    			const escapedItem = item.replace(/'/g, "'\\''");
    			const expandedCommand = command.replace(/\$item/g, `'${escapedItem}'`);
    
    			const safeFilename = toSafeFilename(item);
    			const stdoutFile = join(runDir, `${safeFilename}.stdout.txt`);
    			const stderrFile = join(runDir, `${safeFilename}.stderr.txt`);
    
    			tasks.push({
    				command: expandedCommand,
    				stdoutFile,
    				stderrFile,
    			});
    
    			results.push({
    				item,
    				files: { stdout: stdoutFile, stderr: stderrFile },
    			});
    		}
    
    		// Run commands in batches of 10
    		await runInBatches(tasks);
    
    		// Build prose response
    		const fileList = results
    			.map(
    				(r) =>
    					`- ${r.item}: stdout at "${r.files.stdout}", stderr at "${r.files.stderr}"`,
    			)
    			.join("\n");
    
    		const numBatches = Math.ceil(items.length / BATCH_SIZE);
    
    		return {
    			content: [
    				{
    					type: "text",
    					text: `Completed ${results.length} shell commands in ${numBatches} batch(es) of up to ${BATCH_SIZE} parallel commands each. Output has been streamed to files.
    
    OUTPUT FILES:
    ${fileList}
    
    NEXT STEPS:
    1. Read the stdout files to check the results of each command
    2. If there are errors, check the corresponding stderr files for details
    
    All commands have completed and output files are ready to read.`,
    				},
    			],
    		};
    	},
    );
  • Helper function runInBatches that executes the command tasks in batches of BATCH_SIZE using Promise.all for parallelism, calling runCommandToFiles for each.
    async function runInBatches(
    	tasks: Array<{
    		command: string;
    		stdoutFile: string;
    		stderrFile: string;
    	}>,
    ): Promise<void> {
    	for (let i = 0; i < tasks.length; i += BATCH_SIZE) {
    		const batch = tasks.slice(i, i + BATCH_SIZE);
    		await Promise.all(
    			batch.map((task) =>
    				runCommandToFiles(task.command, task.stdoutFile, task.stderrFile),
    			),
    		);
    	}
    }
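The batching pattern above generalizes beyond shell tasks; a self-contained sketch (names are illustrative, not from the source):

```typescript
// Generic form of the batching pattern used by runInBatches: run an async
// function over items at most batchSize at a time, preserving input order.
async function mapInBatches<T, R>(
  items: T[],
  batchSize: number,
  fn: (item: T) => Promise<R>,
): Promise<R[]> {
  const results: R[] = [];
  for (let i = 0; i < items.length; i += batchSize) {
    // Each slice runs fully in parallel; the next slice starts only
    // after every task in the current slice has settled.
    const batch = items.slice(i, i + batchSize);
    results.push(...(await Promise.all(batch.map(fn))));
  }
  return results;
}

mapInBatches([1, 2, 3, 4, 5], 2, async (n) => n * 2).then((out) => {
  console.log(out); // prints: [ 2, 4, 6, 8, 10 ]
});
```

Promise.all preserves input order regardless of completion order, so results line up with items even when tasks in a batch finish out of sequence.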
  • Helper function that spawns a shell command and streams stdout and stderr to separate files, tracking the process for cleanup.
    function runCommandToFiles(
    	command: string,
    	stdoutFile: string,
    	stderrFile: string,
    ): Promise<void> {
    	return new Promise((resolve) => {
    		(async () => {
    			const stdoutHandle = await open(stdoutFile, "w");
    			const stderrHandle = await open(stderrFile, "w");
    			const stdoutStream = stdoutHandle.createWriteStream();
    			const stderrStream = stderrHandle.createWriteStream();
    
    			const child = spawn("sh", ["-c", command], {
    				stdio: ["ignore", "pipe", "pipe"],
    			});
    
    			// Track the child process for cleanup on shutdown
    			activeProcesses.add(child);
    
    			child.stdout.pipe(stdoutStream);
    			child.stderr.pipe(stderrStream);
    
    			child.on("close", async () => {
    				activeProcesses.delete(child);
    				stdoutStream.end();
    				stderrStream.end();
    				await stdoutHandle.close();
    				await stderrHandle.close();
    				resolve();
    			});
    
    			child.on("error", async (err) => {
    				activeProcesses.delete(child);
    				stderrStream.write(`\nERROR: ${err.message}\n`);
    				stdoutStream.end();
    				stderrStream.end();
    				await stdoutHandle.close();
    				await stderrHandle.close();
    				resolve();
    			});
    		})();
    	});
    }
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and excels by disclosing key behavioral traits: it runs commands in batches of 10 parallel processes, streams output to files, waits for completion, and handles variable substitution with proper escaping. It also explains post-execution steps like reading stdout/stderr files, adding valuable context beyond basic functionality.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (e.g., WHEN TO USE, HOW IT WORKS) and front-loaded key information. While slightly verbose, each sentence earns its place by adding necessary details like batch size and file naming, making it efficient for understanding without waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (parallel execution, file output) and lack of annotations/output schema, the description is highly complete. It covers purpose, usage, behavior, parameters, and post-execution steps, providing all needed context for an AI agent to invoke it correctly without gaps.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the baseline is 3. The description adds significant value by explaining variable substitution with $item, providing examples (e.g., 'cat $item'), and detailing how items are shell-escaped, which clarifies semantics beyond the schema's basic parameter descriptions.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('executes a shell command for each item in a previously created list') and distinguishes it from sibling tools: it operates on lists created by create_list and is distinct from run_agent_across_list. It explicitly mentions batch processing with parallel execution.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The 'WHEN TO USE' section provides explicit guidance on when to use this tool (e.g., running same command across multiple files, batch processing) and implies alternatives by referencing sibling tools like create_list for list creation. It clearly sets the context for usage with shell commands on collections.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
