
A-Share MCP Server

get_performance_express_report

Generate performance reports for A-share stocks within specified date ranges to analyze financial data and track market indicators.

Instructions

Performance express report within date range.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| code | Yes | | |
| start_date | Yes | | |
| end_date | Yes | | |
| limit | No | | 250 |
| format | No | | markdown |
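For reference, a hypothetical set of call arguments matching this schema (the `code` value is illustrative; only the field names and defaults come from the schema and handler signature):

```python
# Hypothetical example of the arguments an MCP client would send for this
# tool; field names and defaults follow the input schema above.
args = {
    "code": "sh.600000",         # Baostock-style A-share code (assumption)
    "start_date": "2024-01-01",  # YYYY-MM-DD
    "end_date": "2024-06-30",    # YYYY-MM-DD
    "limit": 250,                # optional; defaults to 250
    "format": "markdown",        # optional; defaults to "markdown"
}

required = {"code", "start_date", "end_date"}
missing = required - args.keys()
assert not missing, f"missing required fields: {missing}"
```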

Implementation Reference

  • The MCP tool handler function, decorated with @app.tool() for registration. It wraps the use case execution with standardized error handling, logging, and context.
    @app.tool()
    def get_performance_express_report(code: str, start_date: str, end_date: str, limit: int = 250, format: str = "markdown") -> str:
        """Performance express report within date range."""
        return run_tool_with_handling(
            lambda: fetch_performance_express_report(
                active_data_source, code=code, start_date=start_date, end_date=end_date, limit=limit, format=format
            ),
            context=f"get_performance_express_report:{code}:{start_date}-{end_date}",
        )
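The `run_tool_with_handling` helper itself is not shown in the excerpt. A minimal sketch of what such a wrapper might look like (the exception class names are taken from the data-source code below; the logging and return behavior are assumptions):

```python
import logging

logger = logging.getLogger(__name__)

class NoDataFoundError(Exception):
    """Raised when a query returns no rows (name from the excerpt)."""

class DataSourceError(Exception):
    """Raised on upstream API failures (name from the excerpt)."""

def run_tool_with_handling(fn, *, context: str) -> str:
    # Sketch: run the use case and turn known exceptions into readable
    # strings so the MCP client always receives text back.
    try:
        return fn()
    except NoDataFoundError as e:
        logger.warning("%s: %s", context, e)
        return f"No data found: {e}"
    except DataSourceError as e:
        logger.error("%s: %s", context, e)
        return f"Data source error: {e}"
    except Exception as e:
        logger.exception("%s: unexpected failure", context)
        return f"Unexpected error: {e}"
```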
  • mcp_server.py:52-52 (registration)
    Invocation of the registration function that defines and registers the financial report tools, including get_performance_express_report, to the FastMCP app instance.
    register_financial_report_tools(app, active_data_source)
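The registration pattern can be illustrated with a minimal stand-in for the FastMCP app (`FakeApp` and the simplified tool body are hypothetical; only the function names come from the excerpt):

```python
class FakeApp:
    """Minimal stand-in for the FastMCP app (hypothetical, for illustration)."""
    def __init__(self):
        self.tools = {}

    def tool(self):
        def decorator(fn):
            self.tools[fn.__name__] = fn
            return fn
        return decorator

def register_financial_report_tools(app, active_data_source):
    # Tools are defined inside the registration function so each closure
    # captures the shared data source (simplified body; assumption).
    @app.tool()
    def get_performance_express_report(code: str, start_date: str, end_date: str) -> str:
        return f"performance express report for {code} from {active_data_source}"

app = FakeApp()
register_financial_report_tools(app, "demo_source")
```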
  • Use case function that fetches raw data from the data source interface and formats it into markdown or other output for the tool.
    def fetch_performance_express_report(data_source: FinancialDataSource, *, code: str, start_date: str, end_date: str, limit: int, format: str) -> str:
        validate_output_format(format)
        df = data_source.get_performance_express_report(code=code, start_date=start_date, end_date=end_date)
        meta = {"code": code, "start_date": start_date, "end_date": end_date, "dataset": "Performance Express"}
        return format_table_output(df, format=format, max_rows=limit, meta=meta)
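Neither `validate_output_format` nor `format_table_output` appears in the excerpt. A dependency-free sketch of their likely contracts (`SUPPORTED_FORMATS`, the csv branch, and plain-list rows in place of a pandas DataFrame are all assumptions):

```python
SUPPORTED_FORMATS = {"markdown", "csv"}  # assumed set of accepted formats

def validate_output_format(format: str) -> None:
    # Reject unknown formats early, before any data is fetched.
    if format not in SUPPORTED_FORMATS:
        raise ValueError(
            f"Unsupported format {format!r}; expected one of {sorted(SUPPORTED_FORMATS)}")

def format_table_output(columns, rows, *, format, max_rows, meta):
    rows = rows[:max_rows]  # honor the limit parameter
    if format == "markdown":
        lines = ["| " + " | ".join(columns) + " |",
                 "| " + " | ".join("---" for _ in columns) + " |"]
        lines += ["| " + " | ".join(str(v) for v in row) + " |" for row in rows]
    else:  # csv
        lines = [",".join(columns)]
        lines += [",".join(str(v) for v in row) for row in rows]
    footer = ", ".join(f"{k}: {v}" for k, v in meta.items())
    return "\n".join(lines + [footer])
```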
  • Concrete implementation in BaostockDataSource that queries the Baostock API for performance express reports and returns a pandas DataFrame.
    def get_performance_express_report(self, code: str, start_date: str, end_date: str) -> pd.DataFrame:
        """Fetches performance express reports (业绩快报) using Baostock."""
        logger.info(
            f"Fetching Performance Express Report for {code} ({start_date} to {end_date})")
        try:
            with baostock_login_context():
                rs = bs.query_performance_express_report(
                    code=code, start_date=start_date, end_date=end_date)
    
                if rs.error_code != '0':
                    logger.error(
                        f"Baostock API error (Perf Express) for {code}: {rs.error_msg} (code: {rs.error_code})")
                    if "no record found" in rs.error_msg.lower() or rs.error_code == '10002':
                        raise NoDataFoundError(
                            f"No performance express report found for {code} in range {start_date}-{end_date}. Baostock msg: {rs.error_msg}")
                    else:
                        raise DataSourceError(
                            f"Baostock API error fetching performance express report: {rs.error_msg} (code: {rs.error_code})")
    
                data_list = []
                while rs.next():
                    data_list.append(rs.get_row_data())
    
                if not data_list:
                    logger.warning(
                        f"No performance express report found for {code} in range {start_date}-{end_date} (empty result set).")
                    raise NoDataFoundError(
                        f"No performance express report found for {code} in range {start_date}-{end_date} (empty result set).")
    
            result_df = pd.DataFrame(data_list, columns=rs.fields)
            logger.info(
                f"Retrieved {len(result_df)} performance express report records for {code}.")
            return result_df
        except (NoDataFoundError, DataSourceError):
            raise
        except Exception as e:
            # Reconstructed except clause (assumption): the excerpt is cut off
            # mid-try, so unexpected failures are wrapped in DataSourceError here.
            logger.exception(
                f"Unexpected error fetching performance express report for {code}")
            raise DataSourceError(
                f"Unexpected error fetching performance express report: {e}") from e
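`baostock_login_context` is referenced but not defined in the excerpt; the pattern is a context manager that opens and closes a Baostock session around each query. A sketch using a stand-in `session_log` instead of the real `bs.login()`/`bs.logout()` calls (shown as comments):

```python
from contextlib import contextmanager

# Stand-in log so the session pattern can be demonstrated without the
# real Baostock calls.
session_log = []

@contextmanager
def baostock_login_context():
    session_log.append("login")       # real code: bs.login() + error check
    try:
        yield
    finally:
        session_log.append("logout")  # real code: bs.logout()

with baostock_login_context():
    session_log.append("query")       # e.g. bs.query_performance_express_report(...)
```

The `try`/`finally` guarantees logout even when the query raises, which matters for a long-lived server process reusing the API.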
  • Abstract method in the FinancialDataSource interface defining the expected signature for fetching performance express report data.
    @abstractmethod
    def get_performance_express_report(self, code: str, start_date: str, end_date: str) -> pd.DataFrame:
        pass
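A test double makes the interface concrete. The mock below is hypothetical and returns a plain list of dicts where real implementations return a pandas DataFrame:

```python
from abc import ABC, abstractmethod

class FinancialDataSource(ABC):
    """Interface from the excerpt (pandas typing dropped for the sketch)."""
    @abstractmethod
    def get_performance_express_report(self, code: str, start_date: str, end_date: str):
        ...

class MockDataSource(FinancialDataSource):
    # Hypothetical test double; real implementations return a pandas
    # DataFrame, so a list of dicts stands in here.
    def get_performance_express_report(self, code, start_date, end_date):
        return [{"code": code, "start_date": start_date, "end_date": end_date}]

rows = MockDataSource().get_performance_express_report(
    "sh.600000", "2024-01-01", "2024-06-30")
```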
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden but fails to disclose behavioral traits. It doesn't indicate whether this is a read-only operation, whether it has rate limits or authentication needs, or what the output looks like (e.g., report format, data structure). The description is too minimal to guide safe or effective use.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: a single sentence, 'Performance express report within date range.' It's front-loaded and wastes no words, though this brevity contributes to underspecification in other dimensions.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given no annotations, 0% schema coverage, no output schema, and 5 parameters (3 required), the description is incomplete. It lacks details on what the report contains, how to interpret parameters like 'code', and behavioral aspects, making it inadequate for a tool with this complexity.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate but adds no parameter semantics. It mentions 'date range' which hints at start_date and end_date, but doesn't explain the 'code' parameter (e.g., stock code, index code), 'limit', or 'format' options. This leaves key parameters undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 3/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description states 'Performance express report within date range' which indicates the tool retrieves a performance report filtered by date. However, it's vague about what 'performance express report' specifically entails (e.g., stock performance, financial metrics) and doesn't distinguish it from sibling tools like get_stock_analysis or get_forecast_report that might also provide performance-related data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives. The description doesn't mention prerequisites, exclusions, or compare it to siblings such as get_stock_analysis or get_historical_k_data, leaving the agent to guess based on tool names alone.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
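Several of the gaps called out above could be closed in the description itself. A hypothetical rewrite, drawing only on facts from the excerpt (Baostock backing, 业绩快报, the defaults) and the sibling tool names mentioned in this review:

```python
# Hypothetical improved description; a sketch of what fuller tool
# documentation might say, not the server's actual text.
IMPROVED_DESCRIPTION = (
    "Fetch performance express reports (业绩快报) for one A-share stock. "
    "Read-only: issues a Baostock API query; session login is handled "
    "internally. `code` is a Baostock-style stock code such as 'sh.600000'; "
    "`start_date` and `end_date` are YYYY-MM-DD strings bounding the range; "
    "`limit` caps returned rows (default 250); `format` selects the output "
    "table format (default 'markdown'). For price history use "
    "get_historical_k_data instead; for earnings forecasts use "
    "get_forecast_report."
)
```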
