MCP Yahoo Finance

by maxscheijen

get_earning_dates

Retrieve upcoming and historical earnings dates for a specific stock symbol directly from Yahoo Finance. Specify the stock symbol and adjust the limit to control the number of dates returned.

Instructions

Get earning dates.

Input Schema

Name   | Required | Description                                                                                                                                             | Default
limit  | No       | Maximum number of upcoming and recent earnings dates to return. The default of 12 covers roughly the next 4 quarters and the last 8; increase it if more history is needed. | 12
symbol | Yes      | Stock symbol in Yahoo Finance format.                                                                                                                   | —
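As a hypothetical sketch of how an agent's arguments map onto this schema (the "AAPL" symbol, the limit of 8, and the validate_args helper are all illustrative assumptions, not part of the server):

```python
def validate_args(args: dict) -> dict:
    # symbol is the only required key; limit is optional and defaults to 12,
    # which covers roughly the next 4 quarters plus the last 8 reported ones.
    if "symbol" not in args:
        raise ValueError("symbol is required")
    return {"symbol": args["symbol"], "limit": args.get("limit", 12)}


print(validate_args({"symbol": "AAPL"}))             # limit falls back to 12
print(validate_args({"symbol": "AAPL", "limit": 8}))  # shorter history window
```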

Implementation Reference

  • Core handler function implementing the get_earning_dates tool. It fetches earnings dates via yfinance's Ticker.get_earnings_dates, converts the DatetimeIndex to string dates when a DataFrame is returned, and serializes the result to JSON.
    def get_earning_dates(self, symbol: str, limit: int = 12) -> str:
        """Get earning dates.

        Args:
            symbol (str): Stock symbol in Yahoo Finance format.
            limit (int): Maximum number of upcoming and recent earnings
                dates to return. The default of 12 covers roughly the next
                4 quarters and the last 8; increase it for more history.
        """
        stock = Ticker(ticker=symbol, session=self.session)
        earning_dates = stock.get_earnings_dates(limit=limit)

        if isinstance(earning_dates, pd.DataFrame):
            # Convert the DatetimeIndex to plain date strings so the JSON
            # keys are readable dates rather than epoch timestamps.
            earning_dates.index = earning_dates.index.date.astype(str)  # type: ignore
            return earning_dates.to_json(indent=2)
        return str(earning_dates)
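The DataFrame post-processing step in the handler can be sketched in isolation. The snippet below uses a small synthetic DataFrame (the dates and EPS figures are invented for illustration) in place of a live Ticker.get_earnings_dates call:

```python
import json

import pandas as pd

# Synthetic stand-in for the DataFrame that yfinance's
# Ticker.get_earnings_dates would return.
earning_dates = pd.DataFrame(
    {"EPS Estimate": [1.43, 1.39], "Reported EPS": [1.52, 1.46]},
    index=pd.to_datetime(["2024-05-02", "2024-02-01"]),
)

# The handler's transformation: convert the DatetimeIndex to plain
# "YYYY-MM-DD" strings, then serialize the frame to JSON.
earning_dates.index = earning_dates.index.date.astype(str)
payload = earning_dates.to_json(indent=2)

parsed = json.loads(payload)
print(sorted(parsed["EPS Estimate"]))  # -> ['2024-02-01', '2024-05-02']
```

Without the index conversion, `to_json` would emit epoch-millisecond keys, which is why the handler stringifies the dates first.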
  • Registration of the get_earning_dates tool in the list_tools() handler using generate_tool to create the Tool object.
    generate_tool(yf.get_earning_dates),
  • Dispatch logic in call_tool() handler that invokes the get_earning_dates method with parsed arguments and returns the result as TextContent.
    case "get_earning_dates":
        result = yf.get_earning_dates(**args)
        return [TextContent(type="text", text=result)]
  • Helper function that generates the tool schema dynamically from the handler function's signature, type annotations, and docstring parameters.
    def generate_tool(func: Any) -> Tool:
        """Generates a tool schema from a Python function."""
        signature = inspect.signature(func)
        docstring = inspect.getdoc(func) or ""
        param_descriptions = parse_docstring(docstring)

        schema = {
            "name": func.__name__,
            "description": docstring.split("Args:")[0].strip(),
            "inputSchema": {
                "type": "object",
                "properties": {},
                "required": [],
            },
        }

        for param_name, param in signature.parameters.items():
            # Map numeric annotations to "number"; everything else falls
            # back to "string".
            param_type = (
                "number" if param.annotation in (int, float) else "string"
            )
            schema["inputSchema"]["properties"][param_name] = {
                "type": param_type,
                "description": param_descriptions.get(param_name, ""),
            }

            # A parameter is required only when it has no default value.
            if param.default is inspect.Parameter.empty:
                schema["inputSchema"]["required"].append(param_name)

        return Tool(**schema)
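The schema-generation idea can be demonstrated end to end with a self-contained sketch. This version returns a plain dict instead of an MCP Tool object, and uses a simplified stand-in for the server's parse_docstring helper (both simplifications are assumptions for the example, not the server's actual code):

```python
import inspect
from typing import Any


def parse_docstring(doc: str) -> dict:
    # Minimal Google-style "Args:" parser, a simplified stand-in for the
    # server's own parse_docstring helper.
    out, in_args = {}, False
    for line in doc.splitlines():
        line = line.strip()
        if line == "Args:":
            in_args = True
            continue
        if in_args and "):" in line:
            name = line.split("(")[0].strip()
            out[name] = line.split("):", 1)[1].strip()
    return out


def generate_schema(func: Any) -> dict:
    # Same idea as generate_tool above, but returning a plain dict so the
    # sketch has no MCP dependency.
    signature = inspect.signature(func)
    docstring = inspect.getdoc(func) or ""
    descriptions = parse_docstring(docstring)

    properties, required = {}, []
    for name, param in signature.parameters.items():
        json_type = "number" if param.annotation in (int, float) else "string"
        properties[name] = {
            "type": json_type,
            "description": descriptions.get(name, ""),
        }
        # Parameters without a default value are required.
        if param.default is inspect.Parameter.empty:
            required.append(name)

    return {
        "name": func.__name__,
        "description": docstring.split("Args:")[0].strip(),
        "inputSchema": {
            "type": "object",
            "properties": properties,
            "required": required,
        },
    }


def get_earning_dates(symbol: str, limit: int = 12) -> str:
    """Get earning dates.

    Args:
        symbol (str): Stock symbol in Yahoo Finance format.
        limit (int): max earnings dates to return.
    """
    return ""


schema = generate_schema(get_earning_dates)
print(schema["inputSchema"]["required"])  # -> ['symbol']
```

Because `limit` has a default, it is omitted from `required`, and its `int` annotation maps to the JSON type "number" — matching the schema table shown earlier.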
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure but offers minimal information. It implies a read-only operation ('Get'), but doesn't address critical aspects like rate limits, authentication needs, error handling, or what the return format looks like (e.g., structured data vs. raw text). For a tool with no annotation coverage, this leaves significant gaps in understanding how the tool behaves in practice.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise at just three words, with zero wasted language. It's front-loaded with the core purpose, though this brevity comes at the cost of completeness. Every word earns its place by directly stating the tool's function, making it efficient despite being under-specified.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of financial data tools and the lack of both annotations and an output schema, the description is incomplete. It doesn't explain what earning dates are, how they're structured, or what the tool returns, forcing the agent to infer from context. While the schema covers parameters well, the overall context for effective tool use is insufficient, especially compared to sibling tools that might overlap in purpose.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds no parameter-specific information beyond what's already in the schema, which has 100% coverage with detailed descriptions for both 'limit' and 'symbol'. Since the schema fully documents the parameters, the baseline score is 3. The description doesn't compensate with additional context like examples or edge cases, but it also doesn't contradict the schema, so it meets the minimum viable standard.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 2/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description 'Get earning dates.' is a tautology that essentially restates the tool name without adding meaningful context. It specifies the verb 'Get' and resource 'earning dates', but doesn't clarify what earning dates are (e.g., earnings announcement dates for stocks) or how they differ from other financial data tools. While it's clear this retrieves earning dates, it lacks the specificity needed to distinguish it from sibling tools that also retrieve financial data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 1/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives. It doesn't mention sibling tools like get_historical_stock_prices or get_news, nor does it explain why one would choose earning dates over other financial metrics. There's no context about use cases, prerequisites, or exclusions, leaving the agent with no basis for tool selection beyond the name itself.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
