Glama

run_agent

Delegate complex multi-step tasks to autonomous agents for independent execution with dedicated context, maintaining conversation continuity across sessions.

Instructions

Delegate complex, multi-step, or specialized tasks to an autonomous agent for independent execution with dedicated context (e.g., refactoring across multiple files, fixing all test failures, systematic codebase analysis, batch operations). Returns session_id in response metadata - reuse it in subsequent calls to maintain conversation context continuity across multiple agent executions.

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| agent | Yes | Agent name exactly as listed in the `list_agents` resource. | — |
| prompt | Yes | User's direct request content. Agent context is provided separately via the `agent` parameter. | — |
| cwd | Yes | Working directory path for the agent's execution context. Must be an absolute path to a valid directory. | — |
| extra_args | No | Additional configuration parameters for agent execution (optional). | — |
| session_id | No | Session ID for continuing a previous conversation (optional). If omitted, a new session is auto-generated and returned in the response metadata; reuse the returned session_id in subsequent calls to maintain context continuity. | — |
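The session-continuity behavior described above can be sketched in a few lines. This is a hypothetical illustration, not the server's actual implementation: `call_run_agent` is a stand-in for whatever method your MCP client exposes, and the agent name and paths are made up. The logic mirrors the documented contract — a session_id is auto-generated when omitted, returned in response metadata, and reused to continue the same conversation.

```python
import uuid

def call_run_agent(agent, prompt, cwd, session_id=None, extra_args=None):
    # Simulate the server side: auto-generate a session_id when none is given.
    sid = session_id or str(uuid.uuid4())
    result = f"[{agent}] handled: {prompt}"
    # The real tool returns the session_id in response metadata.
    return {"content": result, "metadata": {"session_id": sid}}

# First call: no session_id, so a new one is generated and returned.
first = call_run_agent(
    agent="code-reviewer",        # hypothetical name; must match list_agents
    prompt="Refactor the auth module",
    cwd="/abs/path/to/project",   # must be an absolute path
)
sid = first["metadata"]["session_id"]

# Follow-up call: reuse the returned session_id to keep conversation context.
second = call_run_agent(
    agent="code-reviewer",
    prompt="Now fix the failing tests",
    cwd="/abs/path/to/project",
    session_id=sid,
)
assert second["metadata"]["session_id"] == sid
```

Capturing the session_id from the first response and threading it through later calls is what gives the agent continuity across otherwise independent executions.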
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: it's a delegation tool that returns a session_id for maintaining conversation context across executions. It explains the autonomous execution nature and the importance of reusing session_id, though it could mention potential side effects like resource consumption or execution time.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is efficiently structured in two sentences: the first states the purpose and examples, the second explains the return value and context continuity. Every sentence adds value with zero waste, making it easy to parse and understand quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of a delegation tool with no annotations and no output schema, the description does well by explaining the tool's purpose, usage context, and key behavioral aspects like session continuity. It could be more complete by detailing output format or error handling, but it covers the essentials adequately for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents all parameters thoroughly. The description adds some context by mentioning session_id reuse for continuity, but offers little beyond the schema descriptions; it could, for example, explain how 'agent' relates to the 'list_agents' resource or give practical examples for 'extra_args'. A baseline of 3 is appropriate when the schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose with specific verbs ('delegate', 'returns') and resources ('autonomous agent', 'session_id'), and provides concrete examples of tasks (refactoring, fixing test failures, analysis, batch operations). It distinguishes this as a delegation tool for complex multi-step tasks, which is unambiguous even without sibling tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description explicitly states when to use this tool ('complex, multi-step, or specialized tasks') with helpful examples, and mentions context continuity via session_id. However, it does not specify when NOT to use it or what alternatives exist; with no sibling tools this limitation is acceptable, but it prevents a perfect score.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/prefrontal-systems/sub-agents-mcp'