invoke_agent

Process complex queries requiring reasoning across multiple tools or conversational responses by invoking the full agent with natural language prompts.

Instructions

Invoke the full strands-mcp-cli agent with a natural language prompt. Use this for complex queries that require reasoning across multiple tools or when you need a conversational response from the agent.

Input Schema

| Name | Required | Description | Default |
|--------|----------|-------------------------------------------|---------|
| prompt | Yes | The prompt or query to send to the agent | |

Implementation Reference

  • Handler logic within the MCP call_tool function that creates a fresh agent instance, invokes it with the provided prompt, and returns the response as TextContent.
    if name == "invoke_agent" and expose_agent:
        prompt = arguments.get("prompt")
        if not prompt:
            return [
                types.TextContent(
                    type="text",
                    text="❌ Error: 'prompt' parameter is required",
                )
            ]

        logger.debug(f"Invoking agent with prompt: {prompt[:100]}...")

        # Get the parent agent's configuration
        # Access tools directly from registry dictionary
        tools_for_invocation = [
            agent.tool_registry.registry[tool_name]
            for tool_name in agent_tools.keys()
            if tool_name in agent.tool_registry.registry
        ]

        # Prepare extra kwargs for observability and callbacks
        extra_kwargs = {}
        if hasattr(agent, "callback_handler") and agent.callback_handler:
            extra_kwargs["callback_handler"] = agent.callback_handler

        # Create fresh agent with same configuration but clean message history
        # Inherits: model, tools, trace_attributes, callback_handler
        fresh_agent = Agent(
            name=f"{agent.name}-invocation",
            model=agent.model,
            messages=[],  # Empty message history (clean state)
            tools=tools_for_invocation,
            system_prompt=agent.system_prompt
            if hasattr(agent, "system_prompt")
            else None,
            trace_attributes=agent.trace_attributes
            if hasattr(agent, "trace_attributes")
            else {},
            **extra_kwargs,
        )

        # Call the fresh agent
        result = fresh_agent(prompt)

        # Extract text response from agent result
        response_text = str(result)
        logger.debug(f"Agent invocation complete, response length: {len(response_text)}")

        return [types.TextContent(type="text", text=response_text)]
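
The registry-filtering step in the handler can be isolated as a small sketch. A plain dict stands in for `agent.tool_registry.registry` here; `select_tools` is an illustrative name, not part of the real API.

```python
def select_tools(registry: dict, wanted: list[str]) -> list:
    """Keep only the registered tool objects whose names appear in `wanted`,
    silently skipping names that are not in the registry."""
    return [registry[name] for name in wanted if name in registry]

# Stand-in registry: tool name -> tool object
registry = {"search": object(), "calculator": object()}

tools = select_tools(registry, ["calculator", "missing_tool"])
print(len(tools))  # 1
```

Skipping unknown names (rather than raising a KeyError) matches the defensive `if tool_name in agent.tool_registry.registry` guard in the handler above.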
  • Defines the MCP Tool schema for 'invoke_agent', including name, description, and input schema requiring a 'prompt' parameter.
    agent_invoke_tool = types.Tool(
        name="invoke_agent",
        description=(
            f"Invoke the full {agent.name} agent with a natural language prompt. "
            "Use this for complex queries that require reasoning across multiple tools "
            "or when you need a conversational response from the agent."
        ),
        inputSchema={
            "type": "object",
            "properties": {
                "prompt": {
                    "type": "string",
                    "description": "The prompt or query to send to the agent",
                }
            },
            "required": ["prompt"],
        },
    )
    mcp_tools.append(agent_invoke_tool)
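
To see what the schema enforces, the `inputSchema` dict can be reproduced standalone with a minimal hand-rolled conformance check (no `jsonschema` dependency; `conforms` is an illustrative helper, not part of the server).

```python
INPUT_SCHEMA = {
    "type": "object",
    "properties": {
        "prompt": {
            "type": "string",
            "description": "The prompt or query to send to the agent",
        }
    },
    "required": ["prompt"],
}

def conforms(payload: dict) -> bool:
    """Check required keys are present and 'string' properties are str."""
    for key in INPUT_SCHEMA["required"]:
        if key not in payload:
            return False
    for key, spec in INPUT_SCHEMA["properties"].items():
        if key in payload and spec.get("type") == "string":
            if not isinstance(payload[key], str):
                return False
    return True

print(conforms({"prompt": "What changed in the last release?"}))  # True
print(conforms({}))  # False
```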
  • Registers the list_tools handler that includes the 'invoke_agent' tool in the returned list of available tools when expose_agent is True.
    @server.list_tools()
    async def list_tools() -> list[types.Tool]:
        """Return list of available MCP tools.

        This handler is called when MCP clients request the available tools.
        It returns the pre-built list of MCP Tool objects converted from
        Strands agent tools.
        """
        logger.debug(f"list_tools called, returning {len(mcp_tools)} tools")
        return mcp_tools
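
The pattern above — an async handler that closes over a pre-built list and simply returns it — can be run end to end with a stand-in. `Tool` here is a local dataclass, not the real `mcp.types.Tool`.

```python
import asyncio
from dataclasses import dataclass

@dataclass
class Tool:
    name: str
    description: str

# Pre-built list, analogous to mcp_tools in the server
mcp_tools = [Tool("invoke_agent", "Invoke the full agent with a prompt")]

async def list_tools() -> list[Tool]:
    """Return the pre-built list of available tools."""
    return mcp_tools

tools = asyncio.run(list_tools())
print([t.name for t in tools])  # ['invoke_agent']
```

Because the tool list is built once at startup, the handler itself does no work per request, which keeps `tools/list` responses fast and deterministic.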

MCP directory API

We provide all the information about MCP servers via our MCP API.

    curl -X GET 'https://glama.ai/api/mcp/v1/servers/cagataycali/strands-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.