
react_agent

Process financial queries using AI to deliver research and analysis, supporting stock/fund entity extraction and structured investigations.

Instructions

A React agent that answers user queries.

Input Schema

| Name  | Required | Description | Default |
| ----- | -------- | ----------- | ------- |
| query | Yes      | query       |         |

Implementation Reference

  • Core handler function implementing the ReAct agent loop: initializes tools and messages, performs iterative reasoning (LLM chat) and acting (parallel tool execution) steps until no more tool calls or max_steps reached.
```python
async def async_execute(self):
    """
    Main execution loop implementing the ReAct (Reasoning + Acting) pattern.

    The agent alternates between:
    1. Reasoning: Invoking the LLM to analyze the situation and decide on actions
    2. Acting: Executing the chosen tools and incorporating their results

    This loop continues until:
    - The agent decides no more tools are needed (final answer reached)
    - The maximum number of steps is reached
    """
    from .think_tool_op import ThinkToolOp

    # Initialize think tool operator if needed
    think_op = ThinkToolOp(language=self.language)

    # Build dictionary of available tool operators from context
    tool_op_dict = await self.build_tool_op_dict()

    # Optionally add think_tool to available tools
    if self.add_think_tool:
        tool_op_dict["think_tool"] = think_op

    # Initialize conversation message history
    messages = await self.build_messages()

    # Main ReAct loop: alternate between reasoning and acting
    for step in range(self.max_steps):
        # Reasoning step: LLM analyzes context and decides on actions
        assistant_message, should_continue = await self._reasoning_step(
            messages,
            tool_op_dict,
            step,
        )

        # If no tool calls, the agent has reached a final answer
        if not should_continue:
            break

        # Acting step: execute tools and collect results
        tool_result_messages = await self._acting_step(
            assistant_message,
            tool_op_dict,
            think_op,
            step,
        )

        # Append tool results to message history for next reasoning step
        messages.extend(tool_result_messages)

    # Set final output and store full conversation history in metadata
    self.set_output(messages[-1].content)
    self.context.response.metadata["messages"] = messages
```
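The reason/act alternation above can be sketched in isolation with a stubbed LLM and a plain-dict tool registry. All names here (`fake_llm`, `react_loop`, the message shape) are illustrative, not FlowLLM APIs:

```python
# Minimal ReAct loop sketch: a stubbed "LLM" requests one tool call,
# then produces a final answer once the tool result is in the history.

def fake_llm(messages):
    # First turn: request the 'add' tool; after a tool result: answer.
    if not any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": "", "tool_calls": [("add", (2, 3))]}
    return {"role": "assistant", "content": "The answer is 5", "tool_calls": []}

tools = {"add": lambda a, b: a + b}

def react_loop(query, max_steps=5):
    messages = [{"role": "user", "content": query}]
    for _ in range(max_steps):
        assistant = fake_llm(messages)           # reasoning step
        messages.append(assistant)
        if not assistant["tool_calls"]:          # no tool calls -> final answer
            break
        for name, args in assistant["tool_calls"]:   # acting step
            result = tools[name](*args)
            messages.append({"role": "tool", "content": str(result)})
    return messages[-1]["content"]

print(react_loop("What is 2 + 3?"))  # -> The answer is 5
```

The real operator adds parallel tool execution, logging, and the optional `think_tool`, but the termination logic (break when the assistant emits no tool calls, or when `max_steps` is exhausted) is the same.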
  • Defines the tool's input schema (required 'query' string) and description for MCP tool calling.
```python
def build_tool_call(self) -> ToolCall:
    """Expose metadata describing how to invoke the agent."""
    return ToolCall(
        **{
            "description": "A React agent that answers user queries.",
            "input_schema": {
                "query": {
                    "type": "string",
                    "description": "query",
                    "required": True,
                },
            },
        },
    )
```
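A caller-side check against this schema might look like the following sketch; the dict mirrors the schema above, while `validate_input` is a hypothetical helper, not part of the framework:

```python
# The input schema exposed above, mirrored as a plain dict.
INPUT_SCHEMA = {
    "query": {"type": "string", "description": "query", "required": True},
}

def validate_input(payload: dict) -> list:
    """Return a list of validation errors (empty means valid)."""
    errors = []
    for name, spec in INPUT_SCHEMA.items():
        if spec.get("required") and name not in payload:
            errors.append(f"missing required field: {name}")
        elif name in payload and spec["type"] == "string" and not isinstance(payload[name], str):
            errors.append(f"field {name} must be a string")
    return errors

print(validate_input({"query": "AAPL outlook"}))  # -> []
print(validate_input({}))                         # -> ['missing required field: query']
```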
  • @C.register_op() decorator registers ReactAgentOp as an available operation/tool in the FlowLLM context, making it discoverable as 'react_agent'.
```python
@C.register_op()
class ReactAgentOp(BaseAsyncToolOp):
    ...
```
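The decorator-based registry pattern can be sketched as follows; `OP_REGISTRY` and this `register_op` are a toy stand-in, not the actual FlowLLM context implementation:

```python
# Minimal sketch of a decorator-based operator registry.
OP_REGISTRY = {}

def register_op(name=None):
    def wrapper(cls):
        # Register under the explicit name, or fall back to the
        # lowercased class name when none is given.
        OP_REGISTRY[name or cls.__name__.lower()] = cls
        return cls
    return wrapper

@register_op("react_agent")
class ReactAgentOp:
    pass

print("react_agent" in OP_REGISTRY)  # -> True
```

Registration at import time is what makes the operator discoverable by name when the MCP server enumerates its tools.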
  • Helper to construct initial LLM conversation messages from input query or messages list, adding system prompt with timestamp.
```python
async def build_messages(self) -> List[Message]:
    """Build the initial message history for the LLM."""
    if "query" in self.input_dict and self.input_dict["query"]:
        query: str = self.input_dict["query"]
        now_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        messages = [
            Message(role=Role.SYSTEM, content=self.prompt_format(prompt_name="system_prompt", time=now_time)),
            Message(role=Role.USER, content=query),
        ]
        logger.info(f"round0.system={messages[0].model_dump_json()}")
        logger.info(f"round0.user={messages[1].model_dump_json()}")
    elif "messages" in self.input_dict:
        messages = self.input_dict["messages"]
        messages = [Message(**x) for x in messages]
        logger.info(f"round0.user={messages[-1].model_dump_json()}")
    else:
        raise ValueError("input_dict must contain either 'query' or 'messages'")

    return messages
```
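The same bootstrapping logic can be sketched without the framework's `Message` class, using plain dicts; the system-prompt string is a placeholder, not the operator's actual prompt:

```python
import datetime

# Placeholder system prompt; the real operator renders a named prompt template.
SYSTEM_PROMPT = "You are a finance research agent. Current time: {time}"

def build_messages(input_dict: dict) -> list:
    # Prefer a fresh 'query'; fall back to a pre-built 'messages' history.
    if input_dict.get("query"):
        now_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
        return [
            {"role": "system", "content": SYSTEM_PROMPT.format(time=now_time)},
            {"role": "user", "content": input_dict["query"]},
        ]
    if "messages" in input_dict:
        return list(input_dict["messages"])
    raise ValueError("input_dict must contain either 'query' or 'messages'")
```

Injecting the current timestamp into the system prompt matters for finance queries, where "latest price" or "this quarter" is only meaningful relative to a known clock.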
  • Helper to build dictionary of available tool ops from context, keyed by tool_call.name, setting language.
```python
async def build_tool_op_dict(self) -> dict:
    """Collect available tool operators from the execution context."""
    assert isinstance(self.ops, BaseContext), "self.ops must be BaseContext"
    tool_op_dict: Dict[str, BaseAsyncToolOp] = {
        op.tool_call.name: op
        for op in self.ops.values()
        if isinstance(op, BaseAsyncToolOp)
    }
    for op in tool_op_dict.values():
        op.language = self.language
    return tool_op_dict
```
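The registry pattern here — filter context values by type, key them by tool name, propagate a setting to each — can be shown with a toy class hierarchy (all class and tool names below are illustrative):

```python
class BaseTool:
    name = "base"
    def __init__(self):
        self.language = "en"

class PriceTool(BaseTool):
    name = "stock_price"

class NewsTool(BaseTool):
    name = "stock_news"

def build_tool_dict(context_values, language="zh"):
    # Keep only tool operators, key them by tool name,
    # and propagate the agent's language setting to each.
    tool_dict = {op.name: op for op in context_values if isinstance(op, BaseTool)}
    for op in tool_dict.values():
        op.language = language
    return tool_dict

tools = build_tool_dict([PriceTool(), NewsTool(), "not_a_tool"])
print(sorted(tools))  # -> ['stock_news', 'stock_price']
```

Keying by `tool_call.name` is what lets the acting step resolve the LLM's tool-call requests back to concrete operators in a single dict lookup.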

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/FlowLLM-AI/finance-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.