
Finance MCP

by FlowLLM-AI

react_agent

Process financial queries using AI to deliver research and analysis, supporting stock/fund entity extraction and structured investigations.

Instructions

A React agent that answers user queries.

Input Schema

Name      Required    Description    Default
query     Yes         query          (none)

Implementation Reference

  • Core handler function implementing the ReAct agent loop: initializes tools and messages, then alternates reasoning (LLM chat) and acting (parallel tool execution) steps until no tool calls remain or max_steps is reached.
    async def async_execute(self):
        """
        Main execution loop implementing the ReAct (Reasoning + Acting) pattern.
    
        The agent alternates between:
        1. Reasoning: Invoking the LLM to analyze the situation and decide on actions
        2. Acting: Executing the chosen tools and incorporating their results
    
        This loop continues until:
        - The agent decides no more tools are needed (final answer reached)
        - The maximum number of steps is reached
        """
        from .think_tool_op import ThinkToolOp
    
        # Initialize think tool operator if needed
        think_op = ThinkToolOp(language=self.language)
    
        # Build dictionary of available tool operators from context
        tool_op_dict = await self.build_tool_op_dict()
    
        # Optionally add think_tool to available tools
        if self.add_think_tool:
            tool_op_dict["think_tool"] = think_op
    
        # Initialize conversation message history
        messages = await self.build_messages()
    
        # Main ReAct loop: alternate between reasoning and acting
        for step in range(self.max_steps):
            # Reasoning step: LLM analyzes context and decides on actions
            assistant_message, should_continue = await self._reasoning_step(
                messages,
                tool_op_dict,
                step,
            )
    
            # If no tool calls, the agent has reached a final answer
            if not should_continue:
                break
    
            # Acting step: execute tools and collect results
            tool_result_messages = await self._acting_step(
                assistant_message,
                tool_op_dict,
                think_op,
                step,
            )
    
            # Append tool results to message history for next reasoning step
            messages.extend(tool_result_messages)
    
        # Set final output and store full conversation history in metadata
        self.set_output(messages[-1].content)
        self.context.response.metadata["messages"] = messages
  • Defines the tool's input schema (required 'query' string) and description for MCP tool calling.
    def build_tool_call(self) -> ToolCall:
        """Expose metadata describing how to invoke the agent."""
        return ToolCall(
            **{
                "description": "A React agent that answers user queries.",
                "input_schema": {
                    "query": {
                        "type": "string",
                        "description": "query",
                        "required": True,
                    },
                },
            },
        )
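The metadata returned by `build_tool_call` is what an MCP client sees when listing tools. A `tools/call` request against this tool would look roughly like the following; the JSON-RPC envelope follows the MCP specification, while the query value is purely illustrative.

```python
# Hypothetical MCP tools/call request for react_agent. Only "query" is
# required, per the input schema above; the query text is an example.
import json

request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "react_agent",
        "arguments": {"query": "Summarize recent performance of fund 110011"},
    },
}
print(json.dumps(request, indent=2))
```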
  • @C.register_op() decorator registers ReactAgentOp as an available operation/tool in the FlowLLM context, making it discoverable as 'react_agent'.
    @C.register_op()
    class ReactAgentOp(BaseAsyncToolOp):
  • Helper to construct initial LLM conversation messages from input query or messages list, adding system prompt with timestamp.
    async def build_messages(self) -> List[Message]:
        """Build the initial message history for the LLM."""
        if "query" in self.input_dict and self.input_dict["query"]:
            query: str = self.input_dict["query"]
            now_time = datetime.datetime.now().strftime("%Y-%m-%d %H:%M:%S")
            messages = [
                Message(role=Role.SYSTEM, content=self.prompt_format(prompt_name="system_prompt", time=now_time)),
                Message(role=Role.USER, content=query),
            ]
            logger.info(f"round0.system={messages[0].model_dump_json()}")
            logger.info(f"round0.user={messages[1].model_dump_json()}")
    
        elif "messages" in self.input_dict:
            messages = self.input_dict["messages"]
            messages = [Message(**x) for x in messages]
    
            logger.info(f"round0.user={messages[-1].model_dump_json()}")
        else:
            raise ValueError("input_dict must contain either 'query' or 'messages'")
    
        return messages
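The branching above accepts exactly two input shapes. The sketch below shows both as plain dicts and mirrors the validation order (a non-empty `query` wins over `messages`); the content strings are illustrative.

```python
# The two input_dict shapes build_messages() accepts, as plain dicts.

# Shape 1: a single query string; the operator adds its own system prompt.
input_a = {"query": "How did CSI 300 perform this week?"}

# Shape 2: a pre-built message history; the last entry is the user turn.
input_b = {
    "messages": [
        {"role": "system", "content": "You are a finance research agent."},
        {"role": "user", "content": "Compare fund A and fund B."},
    ]
}

def which_branch(input_dict):
    # Mirrors the branching in build_messages(): query first, then messages.
    if input_dict.get("query"):
        return "query"
    if "messages" in input_dict:
        return "messages"
    raise ValueError("input_dict must contain either 'query' or 'messages'")

print(which_branch(input_a), which_branch(input_b))  # query messages
```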
  • Helper to build dictionary of available tool ops from context, keyed by tool_call.name, setting language.
    async def build_tool_op_dict(self) -> dict:
        """Collect available tool operators from the execution context."""
        assert isinstance(self.ops, BaseContext), "self.ops must be BaseContext"
        tool_op_dict: Dict[str, BaseAsyncToolOp] = {
            op.tool_call.name: op for op in self.ops.values() if isinstance(op, BaseAsyncToolOp)
        }
        for op in tool_op_dict.values():
            op.language = self.language
        return tool_op_dict
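The registry pattern in `build_tool_op_dict` can be reduced to a few lines: collect operators, key them by their declared tool name, and push a shared setting onto each one. The class and tool names below are illustrative stand-ins for `BaseAsyncToolOp` instances.

```python
# Minimal sketch of the registry built by build_tool_op_dict(): key tool
# operators by name and propagate a shared language setting to each.

class ToolOp:
    def __init__(self, name):
        self.name = name
        self.language = None

def build_tool_op_dict(ops, language):
    tool_op_dict = {op.name: op for op in ops}
    for op in tool_op_dict.values():
        op.language = language
    return tool_op_dict

tools = build_tool_op_dict([ToolOp("think_tool"), ToolOp("search")],
                           language="en")
print(sorted(tools), tools["search"].language)  # ['search', 'think_tool'] en
```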
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden of behavioral disclosure. The description only states it 'answers user queries' without explaining how it works, what kind of answers it provides, whether it has limitations, rate limits, authentication needs, or what happens when invoked. This is completely inadequate for a tool with no annotation coverage.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single sentence. However, this brevity comes at the cost of being under-specified rather than efficient. While the core function is front-loaded, the sentence does not earn its place, since it provides no necessary context and no differentiation from sibling tools.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 1/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool has no annotations, no output schema, and competes with multiple query-related sibling tools, the description is completely inadequate. It doesn't explain what makes this tool unique, how it behaves, what it returns, or when to use it. For a tool in this complex context, the description fails to provide essential information.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The input schema has 100% description coverage with one parameter, 'query', described simply as 'query'. The description adds no meaningful information about this parameter beyond what the schema already provides. Since schema coverage is high, the baseline score of 3 is appropriate: the description neither compensates for a weak schema nor adds value beyond it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
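As a concrete illustration of the gap, the schema could document intent rather than echo the parameter name. The sketch below keeps the structure from the `build_tool_call` snippet above and changes only the description text, which is illustrative wording, not the server's actual schema.

```python
# Illustrative richer schema for the single 'query' parameter. The field
# layout mirrors build_tool_call() above; only the description changes.
input_schema = {
    "query": {
        "type": "string",
        "description": (
            "Natural-language finance question to research, e.g. a stock, "
            "fund, or market query. Plain text; no length limit is documented."
        ),
        "required": True,
    },
}
print(input_schema["query"]["type"])  # string
```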

Purpose 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'A React agent that answers user queries' is tautological: it essentially restates the tool name 'react_agent' with minimal elaboration. While it mentions answering queries, it doesn't specify what type of queries, what domain it operates in, or how it differs from other query-answering siblings such as dashscope_search or tavily_search. The purpose is vague rather than specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With sibling tools like dashscope_search and tavily_search that also handle queries, there's no indication of what makes this React agent unique or appropriate for certain contexts. No explicit or implied usage scenarios are mentioned.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
