24mlight

A-Share MCP Server

get_historical_k_data

Fetch historical K-line (OHLCV) data for Chinese A-share stocks to analyze price movements and trading volumes over specified time periods and frequencies.

Instructions

    Fetches historical K-line (OHLCV) data for a Chinese A-share stock.

    Args:
        code: The stock code in Baostock format (e.g., 'sh.600000', 'sz.000001').
        start_date: Start date in 'YYYY-MM-DD' format.
        end_date: End date in 'YYYY-MM-DD' format.
        frequency: Data frequency. Valid options (from Baostock):
                     'd': daily
                     'w': weekly
                     'm': monthly
                     '5': 5 minutes
                     '15': 15 minutes
                     '30': 30 minutes
                     '60': 60 minutes
                   Defaults to 'd'.
        adjust_flag: Adjustment flag for price/volume. Valid options (from Baostock):
                       '1': Forward adjusted (后复权)
                       '2': Backward adjusted (前复权)
                       '3': Non-adjusted (不复权)
                     Defaults to '3'.
        fields: Optional list of specific data fields to retrieve (must be valid Baostock fields).
                If None or empty, default fields will be used (e.g., date, code, open, high, low, close, volume, amount, pctChg).
        limit: Max rows to return. Defaults to 250.
        format: Output format: 'markdown' | 'json' | 'csv'. Defaults to 'markdown'.

    Returns:
        A string containing the K-line data in the requested format (a
        Markdown table by default), or an error message. The output may be
        truncated if the result set exceeds `limit` rows.

Input Schema

Name          Required   Description   Default
code          Yes
start_date    Yes
end_date      Yes
frequency     No                       d
adjust_flag   No                       3
fields        No
limit         No
format        No                       markdown
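For illustration, an argument payload for this tool might look like the following. The values are illustrative only; the exact invocation shape depends on your MCP client.

```python
# Example arguments an MCP client might send to get_historical_k_data.
# Values are illustrative, not taken from the source page.
arguments = {
    "code": "sh.600000",         # stock code in Baostock format
    "start_date": "2024-01-01",  # 'YYYY-MM-DD'
    "end_date": "2024-03-31",
    "frequency": "d",            # daily bars
    "adjust_flag": "3",          # non-adjusted prices
    "limit": 100,                # cap the number of returned rows
    "format": "markdown",        # or 'json' / 'csv'
}
```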

Implementation Reference

  • MCP tool handler for get_historical_k_data. Decorated with @app.tool(); it logs the call and delegates to the use case via run_tool_with_handling, which performs execution and error handling.
    @app.tool()
    def get_historical_k_data(
        code: str,
        start_date: str,
        end_date: str,
        frequency: str = "d",
        adjust_flag: str = "3",
        fields: Optional[List[str]] = None,
        limit: int = 250,
        format: str = "markdown",
    ) -> str:
        """
        Fetches historical K-line (OHLCV) data for a Chinese A-share stock.
    
        Args:
            code: The stock code in Baostock format (e.g., 'sh.600000', 'sz.000001').
            start_date: Start date in 'YYYY-MM-DD' format.
            end_date: End date in 'YYYY-MM-DD' format.
            frequency: Data frequency. Valid options (from Baostock):
                         'd': daily
                         'w': weekly
                         'm': monthly
                         '5': 5 minutes
                         '15': 15 minutes
                         '30': 30 minutes
                         '60': 60 minutes
                       Defaults to 'd'.
            adjust_flag: Adjustment flag for price/volume. Valid options (from Baostock):
                           '1': Forward adjusted (后复权)
                           '2': Backward adjusted (前复权)
                           '3': Non-adjusted (不复权)
                         Defaults to '3'.
            fields: Optional list of specific data fields to retrieve (must be valid Baostock fields).
                    If None or empty, default fields will be used (e.g., date, code, open, high, low, close, volume, amount, pctChg).
            limit: Max rows to return. Defaults to 250.
            format: Output format: 'markdown' | 'json' | 'csv'. Defaults to 'markdown'.
    
        Returns:
            A string containing the K-line data in the requested format (a
            Markdown table by default), or an error message. The output may be
            truncated if the result set exceeds `limit` rows.
        """
        logger.info(
            f"Tool 'get_historical_k_data' called for {code} ({start_date}-{end_date}, freq={frequency}, adj={adjust_flag}, fields={fields})"
        )
        return run_tool_with_handling(
            lambda: fetch_historical_k_data(
                active_data_source,
                code=code,
                start_date=start_date,
                end_date=end_date,
                frequency=frequency,
                adjust_flag=adjust_flag,
                fields=fields,
                limit=limit,
                format=format,
            ),
            context=f"get_historical_k_data:{code}",
        )
  • mcp_server.py:51-51 (registration)
    Invocation of register_stock_market_tools which registers the get_historical_k_data tool (and others) to the FastMCP app instance.
    register_stock_market_tools(app, active_data_source)
  • Abstract method definition in the FinancialDataSource interface, specifying the contract for get_historical_k_data: parameter names, types, and a docstring with validation rules.
    def get_historical_k_data(
        self,
        code: str,
        start_date: str,
        end_date: str,
        frequency: str = "d",
        adjust_flag: str = "3",
        fields: Optional[List[str]] = None,
    ) -> pd.DataFrame:
        """
        Fetches historical K-line (OHLCV) data for a given stock code.
    
        Args:
            code: The stock code (e.g., 'sh.600000', 'sz.000001').
            start_date: Start date in 'YYYY-MM-DD' format.
            end_date: End date in 'YYYY-MM-DD' format.
            frequency: Data frequency. Common values depend on the underlying
                       source (e.g., 'd' for daily, 'w' for weekly, 'm' for monthly,
                       '5', '15', '30', '60' for minutes). Defaults to 'd'.
            adjust_flag: Adjustment flag for historical data. Common values
                         depend on the source (e.g., '1' for forward adjusted,
                         '2' for backward adjusted, '3' for non-adjusted).
                         Defaults to '3'.
            fields: Optional list of specific fields to retrieve. If None,
                    retrieves default fields defined by the implementation.
    
        Returns:
            A pandas DataFrame containing the historical K-line data, with
            columns corresponding to the requested fields.
    
        Raises:
            LoginError: If login to the data source fails.
            NoDataFoundError: If no data is found for the query.
            DataSourceError: For other data source related errors.
            ValueError: If input parameters are invalid.
        """
        pass
  • Use case function called by the tool handler; it validates inputs and formats the DataFrame returned by the data source.
    def fetch_historical_k_data(
        data_source: FinancialDataSource,
        *,
        code: str,
        start_date: str,
        end_date: str,
        frequency: str = "d",
        adjust_flag: str = "3",
        fields: Optional[List[str]] = None,
        limit: int = 250,
        format: str = "markdown",
    ) -> str:
        validate_frequency(frequency)
        validate_adjust_flag(adjust_flag)
        validate_output_format(format)
    
        df = data_source.get_historical_k_data(
            code=code,
            start_date=start_date,
            end_date=end_date,
            frequency=frequency,
            adjust_flag=adjust_flag,
            fields=fields,
        )
        meta = {
            "code": code,
            "start_date": start_date,
            "end_date": end_date,
            "frequency": frequency,
            "adjust_flag": adjust_flag,
        }
        return format_table_output(df, format=format, max_rows=limit, meta=meta)
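The validators called above are not shown on this page. Plausible sketches, assuming simple membership checks against the option sets documented in the tool description, could be:

```python
# Assumed option sets, mirroring the values listed in the tool docstring.
VALID_FREQUENCIES = {"d", "w", "m", "5", "15", "30", "60"}
VALID_ADJUST_FLAGS = {"1", "2", "3"}
VALID_OUTPUT_FORMATS = {"markdown", "json", "csv"}

def validate_frequency(frequency: str) -> None:
    # Reject anything outside the Baostock-documented frequency codes.
    if frequency not in VALID_FREQUENCIES:
        raise ValueError(
            f"Invalid frequency '{frequency}'; expected one of "
            f"{sorted(VALID_FREQUENCIES)}")

def validate_adjust_flag(adjust_flag: str) -> None:
    if adjust_flag not in VALID_ADJUST_FLAGS:
        raise ValueError(
            f"Invalid adjust_flag '{adjust_flag}'; expected one of "
            f"{sorted(VALID_ADJUST_FLAGS)}")

def validate_output_format(format: str) -> None:
    if format not in VALID_OUTPUT_FORMATS:
        raise ValueError(
            f"Invalid format '{format}'; expected one of "
            f"{sorted(VALID_OUTPUT_FORMATS)}")
```

Raising ValueError here lines up with the interface docstring, which lists ValueError for invalid input parameters.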
  • Core data-fetching implementation in BaostockDataSource; queries the Baostock API with error handling and returns a pandas DataFrame.
    def get_historical_k_data(
        self,
        code: str,
        start_date: str,
        end_date: str,
        frequency: str = "d",
        adjust_flag: str = "3",
        fields: Optional[List[str]] = None,
    ) -> pd.DataFrame:
        """Fetches historical K-line data using Baostock."""
        logger.info(
            f"Fetching K-data for {code} ({start_date} to {end_date}), freq={frequency}, adjust={adjust_flag}")
        try:
            formatted_fields = self._format_fields(fields, DEFAULT_K_FIELDS)
            logger.debug(
                f"Requesting fields from Baostock: {formatted_fields}")
    
            with baostock_login_context():
                rs = bs.query_history_k_data_plus(
                    code,
                    formatted_fields,
                    start_date=start_date,
                    end_date=end_date,
                    frequency=frequency,
                    adjustflag=adjust_flag
                )
    
                if rs.error_code != '0':
                    logger.error(
                        f"Baostock API error (K-data) for {code}: {rs.error_msg} (code: {rs.error_code})")
                    # Check common error codes, e.g., for no data
                    if "no record found" in rs.error_msg.lower() or rs.error_code == '10002':  # Example error code
                        raise NoDataFoundError(
                            f"No historical data found for {code} in the specified range. Baostock msg: {rs.error_msg}")
                    else:
                        raise DataSourceError(
                            f"Baostock API error fetching K-data: {rs.error_msg} (code: {rs.error_code})")
    
                data_list = []
                while rs.next():
                    data_list.append(rs.get_row_data())
    
                if not data_list:
                    logger.warning(
                        f"No historical data found for {code} in range (empty result set from Baostock).")
                    raise NoDataFoundError(
                        f"No historical data found for {code} in the specified range (empty result set).")
    
                # Crucial: Use rs.fields for column names
                result_df = pd.DataFrame(data_list, columns=rs.fields)
                logger.info(f"Retrieved {len(result_df)} records for {code}.")
                return result_df
    
        except (LoginError, NoDataFoundError, DataSourceError, ValueError) as e:
            # Re-raise known errors
            logger.warning(
                f"Caught known error fetching K-data for {code}: {type(e).__name__}")
            raise
        except Exception as e:
            # Wrap unexpected errors
            # Use logger.exception to include traceback
            logger.exception(
                f"Unexpected error fetching K-data for {code}: {e}")
            raise DataSourceError(
                f"Unexpected error fetching K-data for {code}: {e}")
Behavior 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses some behavioral traits: it specifies the data source (Baostock), mentions output truncation for large results, and describes the return format. However, it lacks details on rate limits, authentication needs, or error handling, which are important for a data-fetching tool.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Args, Returns) and uses bullet-like formatting for parameter details. It's appropriately sized for the complexity, though some sentences could be more concise (e.g., the Returns section is slightly verbose).

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (8 parameters, no annotations, no output schema), the description is moderately complete. It covers parameters well and hints at output behavior, but lacks details on errors, data freshness, or integration context, leaving gaps for an agent to use it effectively.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed semantics for all 8 parameters. It explains each parameter's purpose, valid values (with examples for enums like frequency and adjust_flag), defaults, and optionality, adding significant value beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool 'fetches historical K-line (OHLCV) data for a Chinese A-share stock,' which is a specific verb+resource combination. However, it doesn't explicitly distinguish this tool from its many siblings (e.g., get_stock_basic_info, get_dividend_data), which all fetch different types of stock data, so it doesn't reach the highest score.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. With many sibling tools for stock data (e.g., get_balance_data, get_profit_data), there's no mention of use cases, prerequisites, or comparisons to help an agent choose appropriately.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
