Glama
massive-com

Polygon-io MCP Server

Official

get_aggs

Retrieve aggregate bars for a specific ticker over a defined date range, with a custom time window size. Ideal for analyzing financial data trends over time.

Instructions

List aggregate bars for a ticker over a given date range in custom time window sizes.

Input Schema

Name        Required  Default
adjusted    No
from_       Yes
limit       No        10
multiplier  Yes
params      No
sort        No
ticker      Yes
timespan    Yes
to          Yes

(The schema provides no per-parameter descriptions.)
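For illustration, a hypothetical argument set matching this schema. The ticker, dates, and option values are invented, not taken from the Polygon API docs; the trailing underscore in from_ sidesteps Python's from keyword.

```python
# Hypothetical get_aggs arguments: daily bars for AAPL over January 2024.
# All values are illustrative.
args = {
    "ticker": "AAPL",
    "multiplier": 1,        # bars span 1 x timespan
    "timespan": "day",      # unit of the time window
    "from_": "2024-01-02",  # range start (trailing _ avoids the 'from' keyword)
    "to": "2024-01-31",     # range end
    "adjusted": True,       # request split-adjusted prices
    "sort": "asc",          # oldest bars first
    "limit": 50,            # cap the number of bars returned
}

# Confirm every required field from the schema is present.
required = {"ticker", "multiplier", "timespan", "from_", "to"}
print(required <= set(args))  # → True
```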

Implementation Reference

  • The get_aggs tool handler: decorated with @poly_mcp.tool for registration, its typed parameters serve as the input schema. It calls the massive_client.get_aggs API, formats the response to CSV via the json_to_csv helper, and returns any error as a string.
    @poly_mcp.tool(annotations=ToolAnnotations(readOnlyHint=True))
    async def get_aggs(
        ticker: str,
        multiplier: int,
        timespan: str,
        from_: Union[str, int, datetime, date],
        to: Union[str, int, datetime, date],
        adjusted: Optional[bool] = None,
        sort: Optional[str] = None,
        limit: Optional[int] = 10,
        params: Optional[Dict[str, Any]] = None,
    ) -> str:
        """
        List aggregate bars for a ticker over a given date range in custom time window sizes.
        """
        try:
            results = massive_client.get_aggs(
                ticker=ticker,
                multiplier=multiplier,
                timespan=timespan,
                from_=from_,
                to=to,
                adjusted=adjusted,
                sort=sort,
                limit=limit,
                params=params,
                raw=True,
            )
    
            # Decode the raw response bytes and convert the JSON payload to CSV
            return json_to_csv(results.data.decode("utf-8"))
        except Exception as e:
            return f"Error: {e}"
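The final conversion step of the handler (JSON bytes in, CSV out) can be sketched in isolation. This stand-alone snippet mimics what json_to_csv produces for a list-shaped 'results' payload; the field names (o, h, l, c, v) follow Polygon's aggregate bar convention, but the values are invented:

```python
import csv
import io
import json

# Invented payload shaped like an aggregates response with a 'results' list.
raw = json.dumps({
    "ticker": "AAPL",
    "results": [
        {"o": 187.15, "h": 188.44, "l": 183.89, "c": 185.64, "v": 82488700},
        {"o": 184.22, "h": 185.88, "l": 183.43, "c": 184.25, "v": 58414500},
    ],
}).encode("utf-8")

# Mirror the handler: decode bytes, extract 'results', emit CSV.
records = json.loads(raw.decode("utf-8"))["results"]
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(records[0]), lineterminator="\n")
writer.writeheader()
writer.writerows(records)
print(out.getvalue())
# o,h,l,c,v
# 187.15,188.44,183.89,185.64,82488700
# 184.22,185.88,183.43,184.25,58414500
```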
  • The json_to_csv helper, used in get_aggs to convert the API's JSON response into a flattened CSV string for tool output; its _flatten_dict sub-helper handles nested structures.
    def json_to_csv(json_input: str | dict) -> str:
        """
        Convert JSON to flattened CSV format.
    
        Args:
            json_input: JSON string or dict. If the JSON has a 'results' key containing
                       a list, it will be extracted. Otherwise, the entire structure
                       will be wrapped in a list for processing.
    
        Returns:
            CSV string with headers and flattened rows
        """
        # Parse JSON if it's a string
        if isinstance(json_input, str):
            try:
                data = json.loads(json_input)
            except json.JSONDecodeError:
                # If JSON parsing fails, return empty CSV
                return ""
        else:
            data = json_input
    
        if isinstance(data, dict) and "results" in data:
            results_value = data["results"]
            # Handle both list and single object responses
            if isinstance(results_value, list):
                records = results_value
            elif isinstance(results_value, dict):
                # Single object response (e.g., get_last_trade returns results as object)
                records = [results_value]
            else:
                records = [results_value]
        elif isinstance(data, dict) and "last" in data:
            # Handle responses with "last" key (e.g., get_last_trade, get_last_quote)
            records = [data["last"]] if isinstance(data["last"], dict) else [data]
        elif isinstance(data, list):
            records = data
        else:
            records = [data]
    
        # Only flatten dict records, skip non-dict items
        flattened_records = []
        for record in records:
            if isinstance(record, dict):
                flattened_records.append(_flatten_dict(record))
            else:
                # If it's not a dict, wrap it in a dict with a 'value' key
                flattened_records.append({"value": str(record)})
    
        if not flattened_records:
            return ""
    
        # Get all unique keys across all records (for consistent column ordering)
        all_keys = []
        seen = set()
        for record in flattened_records:
            if isinstance(record, dict):
                for key in record.keys():
                    if key not in seen:
                        all_keys.append(key)
                        seen.add(key)
    
        output = io.StringIO()
        writer = csv.DictWriter(output, fieldnames=all_keys, lineterminator="\n")
        writer.writeheader()
        writer.writerows(flattened_records)
    
        return output.getvalue()
    
    
    def _flatten_dict(
        d: dict[str, Any], parent_key: str = "", sep: str = "_"
    ) -> dict[str, Any]:
        """
        Flatten a nested dictionary by joining keys with separator.
    
        Args:
            d: Dictionary to flatten
            parent_key: Key from parent level (for recursion)
            sep: Separator to use between nested keys
    
        Returns:
            Flattened dictionary with no nested structures
        """
        items = []
        for k, v in d.items():
            new_key = f"{parent_key}{sep}{k}" if parent_key else k
    
            if isinstance(v, dict):
                # Recursively flatten nested dicts
                items.extend(_flatten_dict(v, new_key, sep=sep).items())
            elif isinstance(v, list):
                # Stringify lists via str(); note this keeps the bracketed repr, e.g. '[1, 2]'
                items.append((new_key, str(v)))
            else:
                items.append((new_key, v))
    
        return dict(items)
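A minimal stand-alone sketch of the flattening behavior. The flatten function below mirrors _flatten_dict above; the sample record is invented, loosely shaped like a single-object 'last trade' response:

```python
import csv
import io
from typing import Any

def flatten(d: dict[str, Any], parent: str = "", sep: str = "_") -> dict[str, Any]:
    # Mirrors _flatten_dict: nested keys are joined with the separator.
    out: dict[str, Any] = {}
    for k, v in d.items():
        key = f"{parent}{sep}{k}" if parent else k
        if isinstance(v, dict):
            out.update(flatten(v, key, sep))
        else:
            out[key] = str(v) if isinstance(v, list) else v
    return out

# Invented single-object response with one level of nesting.
record = {"symbol": "AAPL", "last": {"price": 185.64, "size": 100}}
row = flatten(record)

out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=list(row), lineterminator="\n")
writer.writeheader()
writer.writerow(row)
print(out.getvalue())
# symbol,last_price,last_size
# AAPL,185.64,100
```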
  • The @poly_mcp.tool decorator registers get_aggs as an MCP tool with a readOnlyHint annotation.
    @poly_mcp.tool(annotations=ToolAnnotations(readOnlyHint=True))
  • The function signature's type hints and docstring define the input schema for the get_aggs tool; FastMCP uses them for tool schema generation.
    async def get_aggs(
        ticker: str,
        multiplier: int,
        timespan: str,
        from_: Union[str, int, datetime, date],
        to: Union[str, int, datetime, date],
        adjusted: Optional[bool] = None,
        sort: Optional[str] = None,
        limit: Optional[int] = 10,
        params: Optional[Dict[str, Any]] = None,
    ) -> str:
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries full burden. It mentions 'list' (implying read-only) and scope details, but lacks critical behavioral information: authentication requirements, rate limits, pagination (given 'limit' parameter), data freshness, error handling, or output format. For a 9-parameter tool with no annotation coverage, this is insufficient.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Single sentence, front-loaded with core purpose, zero redundant words. Every element ('List aggregate bars', 'for a ticker', 'over a given date range', 'in custom time window sizes') contributes directly to understanding the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given high complexity (9 parameters, 5 required), no annotations, no output schema, and 0% schema description coverage, the description is inadequate. It covers basic purpose but misses parameter explanations, behavioral context, output details, and sibling differentiation needed for effective tool use.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 2/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 0%, so the description must compensate. It only vaguely references 'ticker', 'date range', and 'custom time window sizes' (hinting at 'multiplier' and 'timespan'), leaving 6 other parameters (adjusted, limit, params, sort, from_, to) completely unexplained. The description adds minimal value beyond the schema's parameter names.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('List aggregate bars') and target resource ('for a ticker'), with additional scope details ('over a given date range in custom time window sizes'). It distinguishes from siblings like 'get_daily_open_close_agg' by specifying custom time windows, but doesn't explicitly contrast with 'list_aggs' or other aggregation tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No explicit guidance on when to use this tool versus alternatives like 'get_daily_open_close_agg', 'get_grouped_daily_aggs', or 'list_aggs'. The description implies usage for time-series aggregation with custom windows but provides no context about prerequisites, limitations, or comparison to sibling tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
