
prompt

Send prompts to CLI runners as background tasks to prevent timeouts during long operations. Returns a task ID for polling results.

Instructions

Send a prompt to a CLI runner as a background task.

Returns immediately with a task ID. Client polls for results. This prevents timeouts for long operations (YOLO mode: 2-5 minutes).
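A minimal client-side sketch of this submit-then-poll pattern, using the official MCP Python SDK. The "prompt" tool name and its arguments come from this page; the launch command and the result-polling tool name ("get_task_result") are assumptions, not documented here — check the server's actual tool list.

import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Launch command is an assumption; see the server's install instructions.
    server = StdioServerParameters(command="nexus-mcp")
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # Submit the prompt as a background task; this returns immediately.
            submitted = await session.call_tool(
                "prompt",
                {"cli": "gemini", "prompt": "Summarize the failing tests"},
            )
            task_id = submitted.content[0].text  # task ID, per the description

            # Poll until the runner finishes. The tool name and the "pending"
            # sentinel are hypothetical placeholders.
            while True:
                status = await session.call_tool("get_task_result", {"task_id": task_id})
                result = status.content[0].text
                if result != "pending":
                    print(result)
                    break
                await asyncio.sleep(2)

asyncio.run(main())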

Args:
  cli: CLI runner name (e.g., "gemini")
  prompt: Prompt text to send to the runner
  context: Optional context metadata
  execution_mode: 'default' (safe) or 'yolo'. None inherits session preference.
  model: Optional model name. None inherits session preference or uses CLI default.
  max_retries: Max retry attempts for transient errors (None inherits session preference).
  output_limit: Max output bytes (None inherits session preference or uses env default).
  timeout: Subprocess timeout seconds (None inherits session preference or uses env default).
  retry_base_delay: Base delay seconds for exponential backoff (None inherits session/config).
  retry_max_delay: Backoff ceiling in seconds (None inherits session preference or config).
  ctx: MCP context (auto-injected by FastMCP). None when called directly in tests.

Returns: Runner's response text
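Read together, the Args list suggests a FastMCP tool signature roughly like the sketch below. Parameter names and semantics are taken from the docstring above; the decorator style and Context type follow common FastMCP usage, the exact type annotations are assumptions, and the body is elided.

from typing import Any, Optional
from fastmcp import FastMCP, Context

mcp = FastMCP("nexus-mcp")

@mcp.tool()
async def prompt(
    cli: str,                                  # CLI runner name, e.g. "gemini"
    prompt: str,                               # prompt text to send to the runner
    context: Optional[dict[str, Any]] = None,  # optional context metadata
    execution_mode: Optional[str] = None,      # 'default' (safe) or 'yolo'; None inherits session preference
    model: Optional[str] = None,               # None inherits session preference or CLI default
    max_retries: Optional[int] = None,         # retry attempts for transient errors
    output_limit: Optional[int] = None,        # max output bytes
    timeout: Optional[float] = None,           # subprocess timeout in seconds
    retry_base_delay: Optional[float] = None,  # base delay for exponential backoff
    retry_max_delay: Optional[float] = None,   # backoff ceiling in seconds
    ctx: Optional[Context] = None,             # auto-injected by FastMCP; None in direct test calls
) -> str:
    """Send a prompt to a CLI runner as a background task."""
    ...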

Input Schema

Name              Required  Description  Default
cli               Yes
prompt            Yes
context           No
execution_mode    No
model             No
max_retries       No
output_limit      No
timeout           No
retry_base_delay  No
retry_max_delay   No

Output Schema

Name    Required  Description  Default
result  Yes

Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden, and it does well. It discloses the key behavioral traits: asynchronous execution (background task, immediate return with a task ID, client polling), timeout prevention for long operations, execution modes ('default' vs 'yolo'), and the inheritance behavior of optional parameters. It does not, however, cover error handling beyond retries, or the security implications of 'yolo' mode.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well structured, with a clear purpose statement up front followed by detailed parameter explanations. Every sentence adds value, though the many parameters make it moderately long. The 'Args:' and 'Returns:' sections are organized, but some of the inheritance explanations could be phrased more concisely.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (10 parameters, asynchronous behavior, no annotations) and the presence of an output schema (implied by 'Returns: Runner's response text'), the description is highly complete. It covers purpose, usage, parameters, and behavioral context thoroughly, leaving little ambiguity for an AI agent to invoke it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate fully—and it does. It provides semantic explanations for all 10 parameters beyond their schema types, including examples ('gemini'), usage notes (inheritance behaviors, 'safe' vs 'yolo'), and practical context (e.g., 'max output bytes', 'subprocess timeout seconds'). This adds significant value over the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Send a prompt to a CLI runner as a background task.' It specifies the verb ('send'), resource ('prompt'), and mechanism ('background task'), distinguishing it from sibling tools like batch_prompt (which likely handles multiple prompts) and preference management tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context about when to use this tool: for long operations (2-5 minutes in 'YOLO mode') to prevent timeouts via background task execution. It mentions 'Returns immediately with a task ID. Client polls for results,' which guides usage patterns. However, it doesn't explicitly contrast when to use this versus batch_prompt or other siblings.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/j7an/nexus-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.