Glama

run

Starts AI agents as background processes for file ops, code analysis, git workflows, web search, and more. Returns a PID to monitor progress.

Instructions

AI Agent Runner: Starts a Claude, Codex, Gemini, Forge, or OpenCode CLI process in the background and returns a PID immediately. Use list_processes and get_result to monitor progress.

• File ops: Create, read, (fuzzy) edit, move, copy, delete, list files, analyze/ocr images, file content analysis
• Code: Generate / analyse / refactor / fix
• Git: Stage ▸ commit ▸ push ▸ tag (any workflow)
• Terminal: Run any CLI cmd or open URLs
• Web search + summarise content on-the-fly
• Multi-step workflows & GitHub integration

IMPORTANT: This tool now returns immediately with a PID. Use other tools to check status and get results.

Supported models: "claude-ultra", "codex-ultra", "gemini-ultra", "sonnet", "sonnet[1m]", "opus", "opusplan", "haiku", "gpt-5.4", "gpt-5.5", "gpt-5.4-mini", "gpt-5.3-codex", "gpt-5.3-codex-spark", "gpt-5.2", "gemini-2.5-pro", "gemini-2.5-flash", "gemini-3.1-pro-preview", "gemini-3-pro-preview", "gemini-3-flash-preview", "forge", "opencode", "oc-<provider/model>"

Prompt input: You must provide EITHER prompt (string) OR prompt_file (file path), but not both.
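
This mutual-exclusivity rule is enforced server-side in buildCliCommand (see the implementation reference below). A minimal sketch of the check, assuming an illustrative function name (`checkPromptInput` is not part of the server's API):

```typescript
// Sketch of the prompt/prompt_file validation the server performs.
// Returns an error message, or null when the input is valid.
function checkPromptInput(prompt?: string, promptFile?: string): string | null {
  const hasPrompt = typeof prompt === 'string' && prompt.trim() !== '';
  const hasPromptFile = typeof promptFile === 'string' && promptFile.trim() !== '';
  if (!hasPrompt && !hasPromptFile) {
    return 'Either prompt or prompt_file must be provided';
  }
  if (hasPrompt && hasPromptFile) {
    return 'Cannot specify both prompt and prompt_file. Please use only one.';
  }
  return null; // exactly one was supplied
}
```

Note that a whitespace-only prompt counts as missing, matching the `trim()` checks in the real implementation.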

Prompt tips

  1. Be concise, explicit & step-by-step for complex tasks.

  2. Check process status with list_processes

  3. Get results with get_result using the returned PID

  4. Kill long-running processes with kill_process if needed

Input Schema

prompt (optional): The detailed natural language prompt for the agent to execute. Either this or prompt_file is required.
prompt_file (optional): Path to a file containing the prompt. Either this or prompt is required. Must be an absolute path or relative to workFolder.
workFolder (required): The working directory for the agent execution. Must be an absolute path.
model (optional): The model to use. Aliases: "claude-ultra" (auto max effort), "codex-ultra" (auto xhigh reasoning), "gemini-ultra". Standard: "sonnet", "sonnet[1m]", "opus", "opusplan", "haiku", "gpt-5.4", "gpt-5.5", "gpt-5.4-mini", "gpt-5.3-codex", "gpt-5.3-codex-spark", "gpt-5.2", "gemini-2.5-pro", "gemini-2.5-flash", "gemini-3.1-pro-preview", "gemini-3-pro-preview", "gemini-3-flash-preview", "forge", "opencode". OpenCode also accepts explicit dynamic models using "oc-<provider/model>". "forge" is a provider key, not a Forge model family selector.
reasoning_effort (optional): Reasoning control for Claude and Codex. Claude uses --effort with "low", "medium", "high", "xhigh", "max". Codex uses model_reasoning_effort with "low", "medium", "high", "xhigh". Gemini, Forge, and OpenCode do not support reasoning_effort in this integration.
session_id (optional): Optional session ID to resume a previous session. Supported for Claude, Codex, Gemini, Forge, and OpenCode. OpenCode resumes in-place via --session and may also be combined with explicit oc-<provider/model> selection.
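
When reasoning_effort is omitted, the ultra aliases imply a default effort (see buildCliCommand in the implementation reference). A minimal sketch of that rule, using an illustrative function name:

```typescript
// Sketch of the alias defaults applied when reasoning_effort is not given:
// "codex-ultra" implies xhigh, "claude-ultra" implies max, otherwise none.
function defaultReasoningEffort(model: string, explicit?: string): string | undefined {
  if (explicit) return explicit;       // a caller-supplied effort always wins
  if (model === 'codex-ultra') return 'xhigh';
  if (model === 'claude-ultra') return 'max';
  return undefined;                    // other models get no implicit effort
}
```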

Implementation Reference

  • src/app/mcp.ts:141-310 (registration)
    The tool 'run' is registered in setupToolHandlers() with its name, description, and inputSchema. The inputSchema specifies optional parameters: prompt, prompt_file, workFolder, model, reasoning_effort, session_id, with workFolder as required.
        this.server.setRequestHandler(ListToolsRequestSchema, async () => ({
          tools: [
            {
              name: 'run',
              description: `AI Agent Runner: Starts a Claude, Codex, Gemini, Forge, or OpenCode CLI process in the background and returns a PID immediately. Use list_processes and get_result to monitor progress.
    
    • File ops: Create, read, (fuzzy) edit, move, copy, delete, list files, analyze/ocr images, file content analysis
    • Code: Generate / analyse / refactor / fix
    • Git: Stage ▸ commit ▸ push ▸ tag (any workflow)
    • Terminal: Run any CLI cmd or open URLs
    • Web search + summarise content on-the-fly
    • Multi-step workflows & GitHub integration
    
    **IMPORTANT**: This tool now returns immediately with a PID. Use other tools to check status and get results.
    
    **Supported models**:
    ${getSupportedModelsDescription()}
    
    **Prompt input**: You must provide EITHER prompt (string) OR prompt_file (file path), but not both.
    
    **Prompt tips**
    1. Be concise, explicit & step-by-step for complex tasks.
    2. Check process status with list_processes
    3. Get results with get_result using the returned PID
    4. Kill long-running processes with kill_process if needed
    
            `,
              inputSchema: {
                type: 'object',
                properties: {
                  prompt: {
                    type: 'string',
                    description: 'The detailed natural language prompt for the agent to execute. Either this or prompt_file is required.',
                  },
                  prompt_file: {
                    type: 'string',
                    description: 'Path to a file containing the prompt. Either this or prompt is required. Must be an absolute path or relative to workFolder.',
                  },
                  workFolder: {
                    type: 'string',
                    description: 'The working directory for the agent execution. Must be an absolute path.',
                  },
                  model: {
                    type: 'string',
                    description: getModelParameterDescription(),
                  },
                  reasoning_effort: {
                    type: 'string',
                    description: 'Reasoning control for Claude and Codex. Claude uses --effort with "low", "medium", "high", "xhigh", "max". Codex uses model_reasoning_effort with "low", "medium", "high", "xhigh". Gemini, Forge, and OpenCode do not support reasoning_effort in this integration.',
                  },
                  session_id: {
                    type: 'string',
                    description: 'Optional session ID to resume a previous session. Supported for Claude, Codex, Gemini, Forge, and OpenCode. OpenCode resumes in-place via --session and may also be combined with explicit oc-<provider/model> selection.',
                  },
                },
                required: ['workFolder'],
              },
            },
            {
              name: 'list_processes',
              description: 'List all running and completed AI agent processes. Returns a simple list with PID, agent type, and status for each process.',
              inputSchema: {
                type: 'object',
                properties: {},
              },
            },
            {
              name: 'get_result',
              description: 'Get the current output and status of an AI agent process by PID. Defaults to a compact result shape; set verbose to true for full metadata and detailed parsed output.',
              inputSchema: {
                type: 'object',
                properties: {
                  pid: {
                    type: 'number',
                    description: 'The process ID returned by run tool.',
                  },
                  verbose: {
                    type: 'boolean',
                    description: 'Optional: If true, returns the full result shape including metadata fields and detailed parsed output such as tool usage history. Defaults to false.',
                  }
                },
                required: ['pid'],
              },
            },
            {
              name: 'wait',
              description: 'Wait for multiple AI agent processes to complete and return their results. Defaults to compact result items; set verbose to true for full metadata and detailed parsed output.',
              inputSchema: {
                type: 'object',
                properties: {
                  pids: {
                    type: 'array',
                    items: { type: 'number' },
                    description: 'List of process IDs to wait for (returned by the run tool).',
                  },
                  timeout: {
                    type: 'number',
                    description: 'Optional: Maximum time to wait in seconds. Defaults to 180 (3 minutes).',
                  },
                  verbose: {
                    type: 'boolean',
                    description: 'Optional: If true, each result item uses the full result shape including metadata fields and detailed parsed output. Defaults to false.',
                  },
                },
                required: ['pids'],
              },
            },
            {
              name: 'peek',
              description: 'One-shot short observation window for running child agents. Returns only natural-language message events, and optionally normalized tool_call events, observed during this call; not a history API, not gapless streaming, and not stdout/stderr tailing. In v1, message extraction is supported for Codex, Claude, OpenCode, Gemini, and best-effort Forge Summary/Completed successfully lines. Forge tool calls are low-precision Execute/Finished markers and never include command output. Tool calls exclude raw tool output.',
              inputSchema: {
                type: 'object',
                properties: {
                  pids: {
                    type: 'array',
                    items: { type: 'number' },
                    description: 'Process IDs returned by run. Duplicates are deduplicated server-side, preserving first occurrence order. Unknown PIDs are returned per process as not_found.',
                  },
                  peek_time_sec: {
                    type: 'number',
                    description: 'Optional positive integer observation window in seconds. Defaults to 10; maximum is 60.',
                  },
                  include_tool_calls: {
                    type: 'boolean',
                    description: 'Optional: include normalized tool_call events without raw tool output. Defaults to false.',
                  },
                },
                required: ['pids'],
              },
            },
            {
              name: 'kill_process',
              description: 'Terminate a running AI agent process by PID.',
              inputSchema: {
                type: 'object',
                properties: {
                  pid: {
                    type: 'number',
                    description: 'The process ID to terminate.',
                  },
                },
                required: ['pid'],
              },
            },
            {
              name: 'cleanup_processes',
              description: 'Remove all completed and failed processes from the process list to free up memory.',
              inputSchema: {
                type: 'object',
                properties: {},
              },
            },
            {
              name: 'doctor',
              description: 'Check supported AI CLI binary availability and path resolution. Does not verify login state or terms acceptance.',
              inputSchema: {
                type: 'object',
                properties: {},
              },
            },
            {
              name: 'models',
              description: 'List supported model names, model aliases, and dynamic backend discovery hints.',
              inputSchema: {
                type: 'object',
                properties: {},
              },
            }
          ],
        }));
  • handleRun is the main handler for the 'run' tool. It delegates to ProcessService.startProcess() with prompt, prompt_file, workFolder, model, session_id, and reasoning_effort parameters, and returns a JSON result with the PID.
    private async handleRun(toolArguments: any): Promise<ServerResult> {
      if (isFirstToolUse) {
        console.error(`ai_cli_mcp v${SERVER_VERSION} started at ${serverStartupTime}`);
        isFirstToolUse = false;
      }
    
      const cliConfigurationError = this.getCliConfigurationError();
      if (cliConfigurationError) {
        throw new McpError(ErrorCode.InvalidParams, cliConfigurationError);
      }
    
      try {
        const result = this.processService.startProcess({
          prompt: toolArguments.prompt,
          prompt_file: toolArguments.prompt_file,
          workFolder: toolArguments.workFolder,
          model: toolArguments.model,
          session_id: toolArguments.session_id,
          reasoning_effort: toolArguments.reasoning_effort,
        });
        return {
          content: [{
            type: 'text',
            text: JSON.stringify(result, null, 2)
          }]
        };
      } catch (error: any) {
        const code = /Failed to start/.test(error.message) ? ErrorCode.InternalError : ErrorCode.InvalidParams;
        throw new McpError(code, error.message);
      }
    }
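
The catch block above classifies errors by message: startup failures map to InternalError, everything else (bad arguments) to InvalidParams. A minimal sketch of that classification, assuming the standard JSON-RPC numeric codes behind the SDK's ErrorCode enum (the handler itself uses the enum directly):

```typescript
// Illustrative stand-ins for the MCP SDK's ErrorCode enum values,
// which follow the JSON-RPC 2.0 error code convention.
const InternalError = -32603;
const InvalidParams = -32602;

// Sketch of handleRun's error classification: only messages produced by a
// failed spawn ("Failed to start ... CLI process") become internal errors.
function classifyRunError(message: string): number {
  return /Failed to start/.test(message) ? InternalError : InvalidParams;
}
```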
  • inputSchema for the 'run' tool defines the JSON Schema object with properties: prompt (string), prompt_file (string), workFolder (string, required), model (string), reasoning_effort (string), session_id (string).
    inputSchema: {
      type: 'object',
      properties: {
        prompt: {
          type: 'string',
          description: 'The detailed natural language prompt for the agent to execute. Either this or prompt_file is required.',
        },
        prompt_file: {
          type: 'string',
          description: 'Path to a file containing the prompt. Either this or prompt is required. Must be an absolute path or relative to workFolder.',
        },
        workFolder: {
          type: 'string',
          description: 'The working directory for the agent execution. Must be an absolute path.',
        },
        model: {
          type: 'string',
          description: getModelParameterDescription(),
        },
        reasoning_effort: {
          type: 'string',
          description: 'Reasoning control for Claude and Codex. Claude uses --effort with "low", "medium", "high", "xhigh", "max". Codex uses model_reasoning_effort with "low", "medium", "high", "xhigh". Gemini, Forge, and OpenCode do not support reasoning_effort in this integration.',
        },
        session_id: {
          type: 'string',
          description: 'Optional session ID to resume a previous session. Supported for Claude, Codex, Gemini, Forge, and OpenCode. OpenCode resumes in-place via --session and may also be combined with explicit oc-<provider/model> selection.',
        },
      },
      required: ['workFolder'],
    },
  • startProcess is the core execution logic. It calls buildCliCommand() to resolve agent type, CLI path, and arguments, then spawns the child process, captures stdout/stderr, tracks the process by PID, and returns a StartProcessResult.
    startProcess(options: Omit<BuildCliCommandOptions, 'cliPaths'>): StartProcessResult {
      const cmd = buildCliCommand({
        ...options,
        cliPaths: this.cliPaths,
      });
    
      const { cliPath, args: processArgs, cwd: effectiveCwd, agent, prompt } = cmd;
      const childProcess = spawn(cliPath, processArgs, {
        cwd: effectiveCwd,
        stdio: ['ignore', 'pipe', 'pipe'],
        detached: false,
      });
    
      const pid = childProcess.pid;
      if (!pid) {
        throw new Error(`Failed to start ${agent} CLI process`);
      }
    
      const processEntry: TrackedProcess = {
        pid,
        process: childProcess,
        prompt,
        workFolder: effectiveCwd,
        model: options.model,
        toolType: agent,
        startTime: new Date().toISOString(),
        stdout: '',
        stderr: '',
        status: 'running',
      };
    
      this.processManager.set(pid, processEntry);
    
      childProcess.stdout.on('data', (data) => {
        const entry = this.processManager.get(pid);
        if (entry) {
          entry.stdout += data.toString();
        }
      });
    
      childProcess.stderr.on('data', (data) => {
        const entry = this.processManager.get(pid);
        if (entry) {
          entry.stderr += data.toString();
        }
      });
    
      childProcess.on('close', (code) => {
        const entry = this.processManager.get(pid);
        if (entry) {
          entry.status = code === 0 ? 'completed' : 'failed';
          entry.exitCode = code !== null ? code : undefined;
        }
      });
    
      childProcess.on('error', (error) => {
        const entry = this.processManager.get(pid);
        if (entry) {
          entry.status = 'failed';
          entry.stderr += `\nProcess error: ${error.message}`;
        }
      });
    
      return {
        pid,
        status: 'started',
        agent,
        message: `${agent} process started successfully`,
      };
    }
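
The 'close' handler above derives the final status from the exit code: 0 becomes completed, anything else (including a null code from signal termination) becomes failed. A minimal sketch of that transition, with an illustrative function name (the real code sets entry.status inline):

```typescript
// Status values tracked per process, as in TrackedProcess above.
type ProcStatus = 'running' | 'completed' | 'failed';

// Sketch of the transition applied when the child process closes.
function statusOnClose(code: number | null): ProcStatus {
  return code === 0 ? 'completed' : 'failed';
}
```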
  • buildCliCommand resolves the agent type (claude, codex, gemini, forge, opencode) based on the model parameter, validates prompt/prompt_file, constructs CLI arguments with appropriate flags for each agent type (e.g., --dangerously-skip-permissions for Claude, --skip-git-repo-check for Codex).
    export function buildCliCommand(options: BuildCliCommandOptions): CliCommand {
      if (!options.workFolder || typeof options.workFolder !== 'string') {
        throw new Error('Missing or invalid required parameter: workFolder');
      }
    
      const hasPrompt = !!options.prompt && typeof options.prompt === 'string' && options.prompt.trim() !== '';
      const hasPromptFile = !!options.prompt_file && typeof options.prompt_file === 'string' && options.prompt_file.trim() !== '';
    
      if (!hasPrompt && !hasPromptFile) {
        throw new Error('Either prompt or prompt_file must be provided');
      }
    
      if (hasPrompt && hasPromptFile) {
        throw new Error('Cannot specify both prompt and prompt_file. Please use only one.');
      }
    
      let prompt: string;
      if (hasPrompt) {
        prompt = options.prompt!;
      } else {
        const promptFilePath = isAbsolute(options.prompt_file!)
          ? options.prompt_file!
          : pathResolve(options.workFolder, options.prompt_file!);
    
        if (!existsSync(promptFilePath)) {
          throw new Error(`Prompt file does not exist: ${promptFilePath}`);
        }
    
        try {
          prompt = readFileSync(promptFilePath, 'utf-8');
        } catch (error: any) {
          throw new Error(`Failed to read prompt file: ${error.message}`);
        }
      }
    
      const cwd = pathResolve(options.workFolder);
      if (!existsSync(cwd)) {
        throw new Error(`Working folder does not exist: ${options.workFolder}`);
      }
    
      const rawModel = options.model || '';
      const { agent, resolvedModel, openCodeModel } = resolveModelSelection(rawModel);
    
      let reasoningEffortArg: string | undefined = options.reasoning_effort;
      if (!reasoningEffortArg) {
        if (rawModel === 'codex-ultra') {
          reasoningEffortArg = 'xhigh';
        } else if (rawModel === 'claude-ultra') {
          reasoningEffortArg = 'max';
        }
      }
    
      const reasoningTargetModel = rawModel === 'opencode' || rawModel.startsWith('oc-')
        ? rawModel
        : (resolvedModel || rawModel);
      const reasoningEffort = getReasoningEffort(reasoningTargetModel, reasoningEffortArg);
    
      let cliPath: string;
      let args: string[];
    
      if (agent === 'codex') {
        cliPath = options.cliPaths.codex;
    
        if (options.session_id && typeof options.session_id === 'string') {
          args = ['exec', 'resume', options.session_id];
        } else {
          args = ['exec'];
        }
    
        if (reasoningEffort) {
          args.push('-c', `model_reasoning_effort=${reasoningEffort}`);
        }
        if (resolvedModel && resolvedModel !== 'codex') {
          args.push('--model', resolvedModel);
        }
    
        args.push('--skip-git-repo-check', '--dangerously-bypass-approvals-and-sandbox', '--json', prompt);
      } else if (agent === 'gemini') {
        cliPath = options.cliPaths.gemini;
        args = ['-y', '--output-format', 'stream-json'];
    
        if (options.session_id && typeof options.session_id === 'string') {
          args.push('-r', options.session_id);
        }
    
        if (resolvedModel) {
          args.push('--model', resolvedModel);
        }
    
        args.push(prompt);
      } else if (agent === 'forge') {
        cliPath = options.cliPaths.forge;
        args = ['-C', cwd];
    
        if (options.session_id && typeof options.session_id === 'string') {
          args.push('--conversation-id', options.session_id);
        }
    
        args.push('-p', prompt);
      } else if (agent === 'opencode') {
        cliPath = options.cliPaths.opencode;
        args = ['run', '--format', 'json', '--dir', cwd];
    
        if (options.session_id && typeof options.session_id === 'string') {
          args.push('--session', options.session_id);
        }
    
        if (openCodeModel) {
          args.push('--model', openCodeModel);
        }
    
        args.push(prompt);
      } else {
        cliPath = options.cliPaths.claude;
        args = ['--dangerously-skip-permissions', '--output-format', 'stream-json', '--verbose'];
    
        if (options.session_id && typeof options.session_id === 'string') {
          args.push('-r', options.session_id, '--fork-session');
        }
    
        if (reasoningEffort) {
          args.push('--effort', reasoningEffort);
        }
    
        args.push('-p', prompt);
        if (resolvedModel) {
          args.push('--model', resolvedModel);
        }
      }
    
      return { cliPath, args, cwd, agent, prompt, resolvedModel };
    }
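
The prompt_file branch above resolves relative paths against workFolder before reading the file. A minimal sketch of just that rule, with an illustrative function name (resolvePromptFile is not part of the module's API):

```typescript
import { isAbsolute, resolve } from 'node:path';

// Sketch of the prompt_file resolution rule in buildCliCommand:
// absolute paths are used as-is, relative paths resolve against workFolder.
function resolvePromptFile(workFolder: string, promptFile: string): string {
  return isAbsolute(promptFile) ? promptFile : resolve(workFolder, promptFile);
}
```

Existence and readability are then checked on the resolved path, so error messages always report the absolute location.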
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Despite no annotations, the description discloses that the tool returns immediately with a PID and requires other tools for results. It lists supported models, prompt requirements, and tips. Could be more specific about error states or lifecycle, but adequate.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Well-structured with clear sections (purpose, capabilities, behavior note, models, input requirement, tips). The model list is duplicated between the schema and the description, and the bullet points make it longer than necessary, but it is front-loaded with key info.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

No output schema exists, but description explains return value (PID) and usage workflow. Covers capabilities (file ops, code, git, terminal, web search). Sufficiently complete for a process runner tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema descriptions cover 100% of parameters. The description adds mutual exclusivity of prompt and prompt_file, model alias meanings, and prompt tips. This adds value beyond schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it starts an AI agent CLI process in the background and returns a PID immediately. It lists supported models and distinguishes from sibling tools like list_processes, get_result, and kill_process.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly instructs to use list_processes and get_result for monitoring, and provides prompt tips for complex tasks, status checking, retrieving results, and killing processes. Clearly contrasts with sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

