
Polygon-io MCP Server

Official

get_daily_open_close_agg

Retrieve daily open, close, high, and low prices for a specific ticker and date using Polygon-io MCP Server. Ideal for analyzing stock data trends over time.

Instructions

Get daily open, close, high, and low for a specific ticker and date.

Input Schema

Name      Required  Description  Default
adjusted  No
date      Yes
params    No
ticker    Yes
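
Per the schema above, only ticker and date are required. A minimal arguments payload might look like the following sketch (the values are illustrative, and the comment on adjusted reflects Polygon's usual split/dividend-adjustment convention, not anything stated in the schema):

```python
# Illustrative arguments for a get_daily_open_close_agg call.
# Only ticker and date are required; adjusted and params are optional.
arguments = {
    "ticker": "AAPL",      # stock symbol
    "date": "2024-01-05",  # trading day, YYYY-MM-DD
    "adjusted": True,      # assumption: split/dividend-adjusted prices
}
```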

Implementation Reference

  • The handler function implementing the get_daily_open_close_agg MCP tool. It fetches daily OHLC data via the massive_client API and returns it formatted as CSV. (The imports shown here are assumed; poly_mcp and massive_client are defined elsewhere in the server.)
    from typing import Any, Dict, Optional

    from mcp.types import ToolAnnotations

    @poly_mcp.tool(annotations=ToolAnnotations(readOnlyHint=True))
    async def get_daily_open_close_agg(
        ticker: str,
        date: str,
        adjusted: Optional[bool] = None,
        params: Optional[Dict[str, Any]] = None,
    ) -> str:
        """
        Get daily open, close, high, and low for a specific ticker and date.
        """
        try:
            results = massive_client.get_daily_open_close_agg(
                ticker=ticker, date=date, adjusted=adjusted, params=params, raw=True
            )
    
            return json_to_csv(results.data.decode("utf-8"))
        except Exception as e:
            return f"Error: {e}"
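
The handler's two paths can be exercised in isolation. In this sketch, FakeRaw and handle are hypothetical stand-ins for the raw client response and the try/except wrapper above:

```python
# Hypothetical stand-ins isolating the handler's two paths: decode the raw
# client response on success, return an "Error: ..." string on failure.
class FakeRaw:
    def __init__(self, data: bytes):
        self.data = data

def handle(raw_or_exc):
    try:
        if isinstance(raw_or_exc, Exception):
            raise raw_or_exc
        return raw_or_exc.data.decode("utf-8")
    except Exception as e:
        return f"Error: {e}"

print(handle(FakeRaw(b'{"status": "OK"}')))      # {"status": "OK"}
print(handle(RuntimeError("401 Unauthorized")))  # Error: 401 Unauthorized
```

Returning the error as a plain string (rather than raising) means the agent always receives text it can act on, at the cost of having to recognize the "Error: " prefix.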
  • Helper function used by the tool to convert JSON API responses to CSV format for output.
    import csv
    import io
    import json
    from typing import Any

    def json_to_csv(json_input: str | dict) -> str:
        """
        Convert JSON to flattened CSV format.
    
        Args:
            json_input: JSON string or dict. If the JSON has a 'results' key, its
                       value is extracted (a list is used as-is; a single object is
                       wrapped in a list). A 'last' key is handled similarly.
                       Otherwise, the entire structure is wrapped in a list.
    
        Returns:
            CSV string with headers and flattened rows
        """
        # Parse JSON if it's a string
        if isinstance(json_input, str):
            try:
                data = json.loads(json_input)
            except json.JSONDecodeError:
                # If JSON parsing fails, return empty CSV
                return ""
        else:
            data = json_input
    
        if isinstance(data, dict) and "results" in data:
            results_value = data["results"]
            # Handle both list and single object responses
            if isinstance(results_value, list):
                records = results_value
            else:
                # Single object response (e.g., get_last_trade returns results as an object)
                records = [results_value]
        elif isinstance(data, dict) and "last" in data:
            # Handle responses with "last" key (e.g., get_last_trade, get_last_quote)
            records = [data["last"]] if isinstance(data["last"], dict) else [data]
        elif isinstance(data, list):
            records = data
        else:
            records = [data]
    
        # Only flatten dict records, skip non-dict items
        flattened_records = []
        for record in records:
            if isinstance(record, dict):
                flattened_records.append(_flatten_dict(record))
            else:
                # If it's not a dict, wrap it in a dict with a 'value' key
                flattened_records.append({"value": str(record)})
    
        if not flattened_records:
            return ""
    
        # Get all unique keys across all records (for consistent column ordering)
        all_keys = []
        seen = set()
        for record in flattened_records:
            if isinstance(record, dict):
                for key in record.keys():
                    if key not in seen:
                        all_keys.append(key)
                        seen.add(key)
    
        output = io.StringIO()
        writer = csv.DictWriter(output, fieldnames=all_keys, lineterminator="\n")
        writer.writeheader()
        writer.writerows(flattened_records)
    
        return output.getvalue()
    
    
    def _flatten_dict(
        d: dict[str, Any], parent_key: str = "", sep: str = "_"
    ) -> dict[str, Any]:
        """
        Flatten a nested dictionary by joining keys with separator.
    
        Args:
            d: Dictionary to flatten
            parent_key: Key from parent level (for recursion)
            sep: Separator to use between nested keys
    
        Returns:
            Flattened dictionary with no nested structures
        """
        items = []
        for k, v in d.items():
            new_key = f"{parent_key}{sep}{k}" if parent_key else k
    
            if isinstance(v, dict):
                # Recursively flatten nested dicts
                items.extend(_flatten_dict(v, new_key, sep=sep).items())
            elif isinstance(v, list):
            # Stringify lists so the whole list fits in a single CSV cell
                items.append((new_key, str(v)))
            else:
                items.append((new_key, v))
    
        return dict(items)
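
Putting the two helpers together: a flat Polygon-style daily open/close payload passes straight through the flattener and comes out as a one-row CSV. The sketch below mirrors the logic of _flatten_dict and the CSV-writing step of json_to_csv; the field values are illustrative, not real market data:

```python
import csv
import io
import json

def _flatten(d, parent="", sep="_"):
    # Mirror of _flatten_dict above: join nested keys with the separator.
    items = []
    for k, v in d.items():
        key = f"{parent}{sep}{k}" if parent else k
        if isinstance(v, dict):
            items.extend(_flatten(v, key, sep).items())
        elif isinstance(v, list):
            items.append((key, str(v)))
        else:
            items.append((key, v))
    return dict(items)

# A Polygon-style daily open/close payload (values are illustrative).
payload = json.dumps({
    "status": "OK", "from": "2024-01-05", "symbol": "AAPL",
    "open": 181.99, "high": 182.76, "low": 180.17, "close": 181.18,
})

row = _flatten(json.loads(payload))
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(row), lineterminator="\n")
writer.writeheader()
writer.writerow(row)
print(out.getvalue())
# status,from,symbol,open,high,low,close
# OK,2024-01-05,AAPL,181.99,182.76,180.17,181.18
```

Note that this payload has no 'results' key, so json_to_csv would take its fallback branch and wrap the whole object in a single-element list, producing exactly one CSV row.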
Behavior 1/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Beyond a readOnlyHint annotation, no behavioral annotations are provided, so the description carries most of the burden of behavioral disclosure. It only states what data is retrieved, without mentioning critical behaviors such as rate limits, authentication needs, data freshness (e.g., real-time vs. delayed), or error handling. This leaves significant gaps for an AI agent trying to invoke it safely and effectively.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single, efficient sentence that front-loads the core functionality without any wasted words. It directly states the action and key inputs, making it easy to parse quickly.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity (a financial data tool with four parameters), the lack of annotations, 0% schema description coverage, and no output schema, the description is incomplete. It doesn't cover parameter details like 'adjusted' or 'params', behavioral aspects like data sources or limitations, or the output format (the tool returns CSV, not raw JSON). This leaves the AI agent with insufficient information for reliable tool selection and invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the tool description must compensate. It mentions 'ticker and date', covering two of the four parameters, but ignores 'adjusted' and 'params'. It doesn't explain what adjustment means (e.g., for splits/dividends) or its default behavior, and it gives no context on the purpose or usage of 'params', leaving both parameters undocumented.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the verb 'Get' and specifies the resource 'daily open, close, high, and low' along with the required inputs 'ticker and date'. It distinguishes itself from siblings like get_previous_close_agg by focusing on a specific date's OHLC data rather than previous close. However, it doesn't explicitly differentiate from get_aggs or list_aggs which might offer similar aggregation data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus alternatives like get_aggs, get_previous_close_agg, or get_snapshot_ticker. It lacks context about prerequisites, such as market hours or data availability, and doesn't mention any exclusions or specific use cases.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/massive-com/mcp_massive'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.