
Finance MCP

by FlowLLM-AI

history_calculate

Analyze historical A-share stock performance by querying price data to calculate trends, indicators, and technical patterns without coding.

Instructions

For a given A-share stock code (do not use this tool for stocks from other markets), ready-made historical price data is available with the following structure:

Name	Type	Description
ts_code	str	Stock code
trade_date	str	Trading date
open	float	Opening price
high	float	Highest price
low	float	Lowest price
close	float	Closing price
pre_close	float	Previous close
change	float	Price change
pct_chg	float	Percentage change
vol	float	Volume (lots)
amount	float	Turnover (thousand CNY)

Provide the stock code you want to analyze along with your question. The tool will generate and execute the corresponding code and return the result. Note:

  1. You do not need to write any code; just ask directly, for example: "How much has it risen over the past week, and has a bearish top divergence appeared?", "What is the recent market trend?", "Has the MACD formed a golden cross?".

  2. The tool can only answer questions based on the data structure above; do not ask questions that require information beyond this data.
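For illustration, the rows the tool works with can be pictured as plain records with the fields above. The field names come from the table; the sample values below are invented. A question like "how much did it move over this period?" then reduces to arithmetic on the close prices:

```python
# Hypothetical sample rows matching the data structure above
# (field names from the table; values invented for illustration).
rows = [
    {"ts_code": "600000.SH", "trade_date": "20240102", "open": 7.10,
     "high": 7.25, "low": 7.05, "close": 7.20, "pre_close": 7.08,
     "change": 0.12, "pct_chg": 1.69, "vol": 350000.0, "amount": 251000.0},
    {"ts_code": "600000.SH", "trade_date": "20240103", "open": 7.20,
     "high": 7.30, "low": 7.12, "close": 7.15, "pre_close": 7.20,
     "change": -0.05, "pct_chg": -0.69, "vol": 280000.0, "amount": 200500.0},
]

# Total return over the window, from the first to the last close.
total_return_pct = (rows[-1]["close"] / rows[0]["close"] - 1) * 100
print(f"{total_return_pct:.2f}%")  # -0.69%
```

The tool generates and runs this kind of code for you; the sketch only shows what the underlying data makes possible.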

Input Schema

Name	Required	Description	Default
code	Yes	A-share stock code (e.g. '600000' or '000001').	(none)
query	Yes	User question about the stock's historical performance.	(none)
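A concrete invocation can be sketched as the argument dict an MCP client would send, together with a minimal check of the two required fields (the validator is hypothetical, not part of the server):

```python
REQUIRED = {"code", "query"}

def validate_arguments(args: dict) -> bool:
    """Check that both required string fields are present and non-empty."""
    return REQUIRED <= set(args) and all(
        isinstance(args[k], str) and args[k] for k in REQUIRED
    )

# Example arguments for a 'history_calculate' call.
call = {
    "code": "600000",
    "query": "Has the MACD formed a golden cross in the last month?",
}
print(validate_arguments(call))  # True
```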

Implementation Reference

  • The async_execute method implements the core handler logic for the 'history_calculate' tool: normalizes stock code, prompts LLM to generate Python code for Tushare historical data analysis, executes the code, and sets the output.
    async def async_execute(self):
        """Generate and execute analysis code for the given stock code.
    
        The method normalizes the stock code to the Tushare format, calls
        the LLM to generate Python analysis code, and finally executes that
        code using ``exec_code``.
        """
    
        code: str = self.input_dict["code"]
        # Normalize plain numeric codes into exchange-qualified codes.
        # Examples: '00'/'30' → 'SZ', '60'/'68' → 'SH', '92' → 'BJ'.
        if code[:2] in ["00", "30"]:
            code = f"{code}.SZ"
        elif code[:2] in ["60", "68"]:
            code = f"{code}.SH"
        elif code[:2] in ["92"]:
            code = f"{code}.BJ"
    
        query: str = self.input_dict["query"]
    
        import os
        import tushare as ts
    
        # Initialize the Tushare pro API using the token from the environment.
        ts.set_token(token=os.getenv("TUSHARE_API_TOKEN", ""))
    
        code_prompt: str = self.prompt_format(
            prompt_name="code_prompt",
            code=code,
            query=query,
            current_date=get_datetime(),
            example=self.get_prompt("code_example"),
        )
        logger.info(f"code_prompt=\n{code_prompt}")
    
        messages = [Message(role=Role.USER, content=code_prompt)]
    
        def get_code(message: Message):
            """Extract Python code from the assistant response."""
    
            return extract_content(message.content, language_tag="python")
    
        result_code = await self.llm.achat(messages=messages, callback_fn=get_code)
        logger.info(f"result_code=\n{result_code}")
    
        # Execute the generated Python code and set the execution result.
        self.set_output(exec_code(result_code))
    
    async def async_default_execute(self, e: Exception = None, **_kwargs):
        ...
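The exchange-suffix normalization at the top of async_execute can be isolated as a small pure function. The prefix rules are exactly those in the snippet; the function name here is hypothetical:

```python
def normalize_code(code: str) -> str:
    """Map a plain numeric A-share code to Tushare's exchange-qualified form."""
    prefix = code[:2]
    if prefix in ("00", "30"):   # Shenzhen main board / ChiNext
        return f"{code}.SZ"
    if prefix in ("60", "68"):   # Shanghai main board / STAR Market
        return f"{code}.SH"
    if prefix == "92":           # Beijing Stock Exchange
        return f"{code}.BJ"
    return code                  # unknown prefix: leave unchanged

print(normalize_code("600000"))  # 600000.SH
print(normalize_code("000001"))  # 000001.SZ
```

Note the snippet leaves already-qualified or unrecognized codes untouched, which matches the fall-through behavior of the if/elif chain above.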
  • The build_tool_call method defines the input schema for the tool: 'code' (string, required, stock code) and 'query' (string, required, natural language question).
    def build_tool_call(self):
        return ToolCall(
            **{
                "description": self.get_prompt("tool_description"),
                "input_schema": {
                    "code": {
                        "type": "string",
                        "description": "A-share stock code (e.g. '600000' or '000001').",
                        "required": True,
                    },
                    "query": {
                        "type": "string",
                        "description": "User question about the stock's historical performance.",
                        "required": True,
                    },
                },
            },
        )
  • The @C.register_op() decorator registers the HistoryCalculateOp class as an MCP tool, which is invoked by the name 'history_calculate'.
    @C.register_op()
    class HistoryCalculateOp(BaseAsyncToolOp):
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses key behavioral traits: it generates and executes code automatically (users don't need to write code), it's limited to the given data structure, and it returns results. However, it lacks details on potential limitations like rate limits, error handling, or authentication needs. No contradiction with annotations exists since none are provided.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is appropriately sized and front-loaded, starting with the purpose and data structure, followed by usage notes. It uses bullet points for clarity. However, some parts could be more concise (e.g., the data table might be verbose for a description), and the second note repeats information about not writing code.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, no output schema, and a tool that generates/executes code (implying complexity), the description is moderately complete. It covers purpose, data constraints, and usage guidelines, but lacks details on output format, error cases, or performance characteristics. For a code-execution tool, more behavioral context would be helpful.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents both parameters ('code' and 'query') adequately. The description adds some context by specifying 'A-share stock code' and giving query examples, but doesn't provide significant additional meaning beyond what's in the schema (e.g., no format details for 'code' beyond 'A-share', no constraints on 'query' length). Baseline 3 is appropriate when schema does the heavy lifting.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool analyzes historical A-share stock price data based on a specific data structure and generates/executes code to answer questions about stock performance. It specifies the resource (A-share stocks) and verb (analyze historical data), though it doesn't explicitly distinguish from sibling tools like 'execute_code' or 'dashscope_search' that might also process queries.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for A-share stocks only (not other markets), based on the specified data structure, and for questions about historical performance. It includes explicit 'do not' guidance (e.g., 'other markets...请不要使用此工具', '请勿提出需要超出该数据范围信息的问题'). However, it doesn't explicitly mention when to use alternatives among the sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
