
Finance MCP

by FlowLLM-AI

history_calculate

Analyze historical A-share stock performance by querying price data to calculate trends, indicators, and technical patterns without coding.

Instructions

For a given A-share stock code (do not use this tool for stocks from other markets), historical price data is readily available with the following structure:

| Name | Type | Description |
|------------|-------|--------------------------|
| ts_code | str | Stock code |
| trade_date | str | Trading date |
| open | float | Opening price |
| high | float | Highest price |
| low | float | Lowest price |
| close | float | Closing price |
| pre_close | float | Previous close |
| change | float | Price change |
| pct_chg | float | Percent change |
| vol | float | Volume (lots) |
| amount | float | Turnover (thousand CNY) |

Provide the stock code you want to analyze along with your question. The tool will generate and execute the corresponding code for you and return the result. Notes:

  1. You do not need to write any code; just ask your question directly, for example: "How much has it risen over the past week, and is there a top divergence?", "What is the recent market trend?", "Has the MACD formed a golden cross?".

  2. The tool can only answer questions based on the data structure above; do not ask questions that require information beyond this data.
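As an illustration of the kind of analysis the generated code performs, here is a minimal sketch using the column names from the table above (the sample prices are invented for demonstration):

```python
import pandas as pd

# Sample rows mimicking the data structure above (values are invented).
df = pd.DataFrame({
    "trade_date": ["20240603", "20240604", "20240605", "20240606", "20240607"],
    "close": [10.00, 10.20, 10.10, 10.50, 10.80],
})
df = df.sort_values("trade_date")

# Cumulative percent change over the window: (last_close / first_close - 1) * 100
pct = (df["close"].iloc[-1] / df["close"].iloc[0] - 1) * 100
print(f"Change over the period: {pct:.2f}%")  # Change over the period: 8.00%
```

A question like "How much has it risen over the past week?" would map to a computation of roughly this shape over the actual query window.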

Input Schema

| Name | Required | Description | Default |
|-------|----------|--------------------------------------------------------|---------|
| code | Yes | A-share stock code (e.g. '600000' or '000001'). | |
| query | Yes | User question about the stock's historical performance. | |
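A call to this tool supplies both fields. The exact client API depends on your MCP setup; the argument payload below is a hypothetical example that matches the schema:

```python
# Example argument payload for the history_calculate tool (values are illustrative).
arguments = {
    "code": "600000",  # A-share stock code
    "query": "How much has it risen over the past week?",
}
print(arguments)
```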

Implementation Reference

  • The async_execute method implements the core handler logic for the 'history_calculate' tool: it normalizes the stock code, prompts the LLM to generate Python code for analyzing Tushare historical data, executes that code, and sets the output.
    async def async_execute(self):
        """Generate and execute analysis code for the given stock code.
    
        The method normalizes the stock code to the Tushare format, calls
        the LLM to generate Python analysis code, and finally executes that
        code using ``exec_code``.
        """
    
        code: str = self.input_dict["code"]
        # Normalize plain numeric codes into exchange-qualified codes.
        # Examples: '00'/'30' → 'SZ', '60'/'68' → 'SH', '92' → 'BJ'.
        if code[:2] in ["00", "30"]:
            code = f"{code}.SZ"
        elif code[:2] in ["60", "68"]:
            code = f"{code}.SH"
        elif code[:2] in ["92"]:
            code = f"{code}.BJ"
    
        query: str = self.input_dict["query"]
    
        import tushare as ts
    
        # Initialize the Tushare pro API using the token from environment.
        ts.set_token(token=os.getenv("TUSHARE_API_TOKEN", ""))
    
        code_prompt: str = self.prompt_format(
            prompt_name="code_prompt",
            code=code,
            query=query,
            current_date=get_datetime(),
            example=self.get_prompt("code_example"),
        )
        logger.info(f"code_prompt=\n{code_prompt}")
    
        messages = [Message(role=Role.USER, content=code_prompt)]
    
        def get_code(message: Message):
            """Extract Python code from the assistant response."""
    
            return extract_content(message.content, language_tag="python")
    
        result_code = await self.llm.achat(messages=messages, callback_fn=get_code)
        logger.info(f"result_code=\n{result_code}")
    
        # Execute the generated Python code and set the execution result.
        self.set_output(exec_code(result_code))
    
  • The build_tool_call method defines the input schema for the tool: 'code' (string, required, stock code) and 'query' (string, required, natural language question).
    return ToolCall(
        **{
            "description": self.get_prompt("tool_description"),
            "input_schema": {
                "code": {
                    "type": "string",
                    "description": "A-share stock code (e.g. '600000' or '000001').",
                    "required": True,
                },
                "query": {
                    "type": "string",
                    "description": "User question about the stock's historical performance.",
                    "required": True,
                },
            },
        },
    )
  • The @C.register_op() decorator registers the HistoryCalculateOp class as an MCP tool, which is invoked by the name 'history_calculate'.
    @C.register_op()
    class HistoryCalculateOp(BaseAsyncToolOp):


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/FlowLLM-AI/finance-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server