invoke_agent
Process complex queries that require reasoning across multiple tools, or get a conversational response, by invoking the full agent with a natural language prompt.
Instructions
Invoke the full strands-mcp-cli agent with a natural language prompt. Use this for complex queries that require reasoning across multiple tools or when you need a conversational response from the agent.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The prompt or query to send to the agent | (none) |
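
The sketch below shows one way a client might call this tool over MCP; it is illustrative only. The tool name `invoke_agent` and the required `prompt` argument come from the schema above, while the stdio launch command (`strands-mcp-cli` with no arguments) and the example prompt text are assumptions to adapt to your setup.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client


async def main() -> None:
    # Hypothetical stdio launch command; substitute however you start the server.
    params = StdioServerParameters(command="strands-mcp-cli", args=[])

    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()

            # 'prompt' is the only required argument per the input schema above.
            result = await session.call_tool(
                "invoke_agent",
                arguments={"prompt": "Summarize the open issues and suggest next steps."},
            )

            # The handler returns the agent's reply as a text content block.
            for block in result.content:
                if getattr(block, "text", None):
                    print(block.text)


asyncio.run(main())
```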
Implementation Reference
- strands_mcp_server/mcp_server.py:325-371 (handler): Handler logic within the MCP `call_tool` function that creates a fresh agent instance, invokes it with the provided prompt, and returns the response as TextContent.

  ```python
  if name == "invoke_agent" and expose_agent:
      prompt = arguments.get("prompt")
      if not prompt:
          return [
              types.TextContent(
                  type="text",
                  text="❌ Error: 'prompt' parameter is required",
              )
          ]

      logger.debug(f"Invoking agent with prompt: {prompt[:100]}...")

      # Get the parent agent's configuration
      # Access tools directly from registry dictionary
      tools_for_invocation = [
          agent.tool_registry.registry[tool_name]
          for tool_name in agent_tools.keys()
          if tool_name in agent.tool_registry.registry
      ]

      # Prepare extra kwargs for observability and callbacks
      extra_kwargs = {}
      if hasattr(agent, "callback_handler") and agent.callback_handler:
          extra_kwargs["callback_handler"] = agent.callback_handler

      # Create fresh agent with same configuration but clean message history
      # Inherits: model, tools, trace_attributes, callback_handler
      fresh_agent = Agent(
          name=f"{agent.name}-invocation",
          model=agent.model,
          messages=[],  # Empty message history (clean state)
          tools=tools_for_invocation,
          system_prompt=agent.system_prompt
          if hasattr(agent, "system_prompt")
          else None,
          trace_attributes=agent.trace_attributes
          if hasattr(agent, "trace_attributes")
          else {},
          **extra_kwargs,
      )

      # Call the fresh agent
      result = fresh_agent(prompt)

      # Extract text response from agent result
      response_text = str(result)

      logger.debug(f"Agent invocation complete, response length: {len(response_text)}")

      return [types.TextContent(type="text", text=response_text)]
  ```
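
  Because the handler constructs a fresh `Agent` with an empty `messages` list on every call, each invocation starts from a clean conversational state while reusing the parent agent's model, tools, system prompt, trace attributes, and callback handler; prompts sent through this tool therefore do not see or extend the parent agent's message history.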
- Defines the MCP Tool schema for 'invoke_agent', including name, description, and input schema requiring a 'prompt' parameter.

  ```python
  agent_invoke_tool = types.Tool(
      name="invoke_agent",
      description=(
          f"Invoke the full {agent.name} agent with a natural language prompt. "
          "Use this for complex queries that require reasoning across multiple tools "
          "or when you need a conversational response from the agent."
      ),
      inputSchema={
          "type": "object",
          "properties": {
              "prompt": {
                  "type": "string",
                  "description": "The prompt or query to send to the agent",
              }
          },
          "required": ["prompt"],
      },
  )
  mcp_tools.append(agent_invoke_tool)
  ```
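
  The tool definition is appended to `mcp_tools`, the same list returned by the `list_tools` handler registered below, so `invoke_agent` is advertised to clients alongside the agent's other converted tools.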
- strands_mcp_server/mcp_server.py:298-308 (registration): Registers the `list_tools` handler that includes the 'invoke_agent' tool in the returned list of available tools when expose_agent is True.

  ```python
  @server.list_tools()
  async def list_tools() -> list[types.Tool]:
      """Return list of available MCP tools.

      This handler is called when MCP clients request the available tools.
      It returns the pre-built list of MCP Tool objects converted from
      Strands agent tools.
      """
      logger.debug(f"list_tools called, returning {len(mcp_tools)} tools")
      return mcp_tools
  ```
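
To confirm the registration end to end, the client sketch above can also list tools before calling one. This fragment is not standalone: it assumes the same `session` object and stdio setup as the earlier example and only relies on `list_tools` returning the pre-built tool list.

```python
# Inside the same ClientSession as the sketch above (assumed setup).
tools_result = await session.list_tools()
print([tool.name for tool in tools_result.tools])
# 'invoke_agent' appears here only when the server exposes the agent
# (expose_agent is True); otherwise it is omitted from mcp_tools.
```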