manus_task_confirm_action

Confirm pending actions when agent status is 'waiting' and not awaiting user message. Provide event_id and input matching the event's confirm_input_schema (e.g., {'accept': true}).

Instructions

Confirm a pending action. Use when a status_update event reports agent_status='waiting' with waiting_for_event_type != 'messageAskUser'. Pass event_id from the event and an input dict matching the event's confirm_input_schema (e.g. {'accept': true} for most events, {'action': 'select', 'client_id': '...'} for needConnectMyBrowser).
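As a sketch of how a client might assemble arguments for this tool (the helper function and event field names here are illustrative assumptions; only the argument shapes come from the description above):

```python
# Sketch: building arguments for manus_task_confirm_action from a
# status_update event. Argument shapes follow the tool description;
# the helper name and event fields are illustrative assumptions.

def build_confirm_args(task_id, event):
    """Return an arguments dict matching the tool's input schema."""
    if event.get("waiting_for_event_type") == "needConnectMyBrowser":
        # Browser-connect events expect a selection payload instead of a flag
        input_payload = {"action": "select", "client_id": event["client_id"]}
    else:
        # Most events only need an acceptance flag
        input_payload = {"accept": True}
    return {
        "task_id": task_id,
        "event_id": event["event_id"],
        "input": input_payload,
    }

args = build_confirm_args(
    "task-123",
    {"event_id": "evt-456", "waiting_for_event_type": "userConfirm"},
)
print(args)
# → {'task_id': 'task-123', 'event_id': 'evt-456', 'input': {'accept': True}}
```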

Input Schema

Name       Required   Description   Default
task_id    Yes
event_id   Yes
input      No
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

The description clearly states the tool's action and input requirements, and the behavioral context (confirming a pending action) can be inferred. It does not explicitly mention side effects, idempotency, or auth requirements, but given the tool's simplicity the description is sufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Extremely concise with two sentences, the first immediately stating the core function. Every word adds value, and the structure is front-loaded.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Covers the trigger condition and input format, and provides examples. It does not describe output or error handling, but given the tool's simplicity and lack of an output schema, this is adequate for an agent to use it correctly.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds meaning for the 'input' parameter with examples and a schema reference, but it does not explicitly describe 'task_id', and 'event_id' is covered only by implying it comes from the event. Given that the schema itself provides no parameter descriptions, this partial coverage is adequate but not complete.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description opens with a clear verb+resource ('Confirm a pending action') and immediately specifies the exact triggering condition, distinguishing it from siblings like manus_task_send_message which handles 'messageAskUser' events.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly defines when to use the tool (when a status_update event reports agent_status='waiting' with waiting_for_event_type != 'messageAskUser') and what input to pass, with concrete examples, effectively guiding the agent to choose this tool over alternatives.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
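The selection rule praised above can be sketched as a small dispatcher. The field names and tool names come from the description; the dispatcher itself is a hypothetical client helper, not part of the tool's API:

```python
# Sketch of the tool-selection rule: which Manus tool handles a given
# status_update event. Field and tool names follow the description;
# the dispatcher is an illustrative assumption.

def pick_tool(event):
    """Choose the Manus tool that should handle a status_update event."""
    if event.get("agent_status") != "waiting":
        return None  # the agent is not blocked on anything yet
    if event.get("waiting_for_event_type") == "messageAskUser":
        # The agent asked the user a question: answer with a message
        return "manus_task_send_message"
    # Every other pending event is resolved via confirmation
    return "manus_task_confirm_action"

print(pick_tool({"agent_status": "waiting",
                 "waiting_for_event_type": "userConfirm"}))
# → manus_task_confirm_action
```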

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/aruxojuyu665/Manus-MCP'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.