massive-mcp

by danielbres (listed on Glama)

get_aggregates

Query stock market aggregate bars (OHLC) by ticker, date range, and bar size, with options for split adjustment and sorting.

Instructions

Aggregated OHLC bars for a stock over a date range.

Input Schema

| Name | Required | Description | Default |
|---|---|---|---|
| ticker | Yes | Stock symbol (e.g. "AAPL"). Case-sensitive. | — |
| multiplier | Yes | Size of the timespan multiplier (e.g. 5 with timespan="minute" => 5-minute bars). | — |
| timespan | Yes | Bar size: second, minute, hour, day, week, month, quarter, year. | — |
| from_ | Yes | Start date "YYYY-MM-DD" or millisecond Unix timestamp. | — |
| to | Yes | End date "YYYY-MM-DD" or millisecond Unix timestamp. | — |
| adjusted | No | Whether to adjust for splits. | true |
| sort | No | "asc" or "desc" by timestamp. | asc |
| limit | No | Max bars (Massive cap 50000). Kept small to limit response size. | 50 |
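As a concrete illustration, a request for 5-minute AAPL bars over one month could pass arguments like the following (every value is an example chosen here, not taken from the tool's documentation):

```python
# Illustrative get_aggregates arguments; all values are examples.
args = {
    "ticker": "AAPL",       # case-sensitive symbol
    "multiplier": 5,        # combined with timespan => 5-minute bars
    "timespan": "minute",
    "from_": "2024-01-02",  # "YYYY-MM-DD" or millisecond Unix timestamp
    "to": "2024-01-31",
    "adjusted": True,       # split-adjusted (the default)
    "sort": "asc",          # oldest bar first
    "limit": 50,            # small default to keep responses compact
}
```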

Output Schema

No output fields are declared in the schema; the tool returns the raw API response as dict[str, Any].

Implementation Reference

  • The get_aggregates handler: builds the REST path /v2/aggs/ticker/{ticker}/range/{multiplier}/{timespan}/{from_}/{to} and calls the Massive API client.
    async def get_aggregates(
        ticker: str,
        multiplier: int,
        timespan: Timespan,
        from_: str,
        to: str,
        adjusted: bool = True,
        sort: Literal["asc", "desc"] = "asc",
        limit: int = 50,
    ) -> dict[str, Any]:
        """Aggregated OHLC bars for a stock over a date range.
    
        Args:
            ticker: Stock symbol (e.g. "AAPL"). Case-sensitive.
            multiplier: Size of the timespan multiplier (e.g. 5 with timespan="minute" => 5-min bars).
            timespan: Bar size: second, minute, hour, day, week, month, quarter, year.
            from_: Start date "YYYY-MM-DD" or millisecond unix timestamp.
            to: End date "YYYY-MM-DD" or millisecond unix timestamp.
            adjusted: Whether to adjust for splits. Default true.
            sort: "asc" or "desc" by timestamp.
            limit: Max bars (Massive cap 50000). Default 50 to keep responses small.
        """
        path = f"/v2/aggs/ticker/{ticker}/range/{multiplier}/{timespan}/{from_}/{to}"
        return await client.get(path, {"adjusted": str(adjusted).lower(), "sort": sort, "limit": limit})
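The path template expands straightforwardly; a quick sketch of the formatting step (ticker and dates are illustrative):

```python
# Sketch of the REST path the handler builds; values are examples.
ticker, multiplier, timespan = "AAPL", 5, "minute"
from_, to = "2024-01-02", "2024-01-31"
path = f"/v2/aggs/ticker/{ticker}/range/{multiplier}/{timespan}/{from_}/{to}"
print(path)  # /v2/aggs/ticker/AAPL/range/5/minute/2024-01-02/2024-01-31
```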
  • Input parameters and return type for get_aggregates: ticker (str), multiplier (int), timespan (Literal of time units), from_ (str), to (str), adjusted (bool), sort (asc/desc), limit (int). Returns dict[str,Any].
    async def get_aggregates(
        ticker: str,
        multiplier: int,
        timespan: Timespan,
        from_: str,
        to: str,
        adjusted: bool = True,
        sort: Literal["asc", "desc"] = "asc",
        limit: int = 50,
    ) -> dict[str, Any]:
  • Registration via @mcp.tool() decorator inside the register() function, called from server.py line 38 (aggregates.register(mcp, client)).
    def register(mcp: FastMCP, client: MassiveClient) -> None:
        @mcp.tool()
        async def get_aggregates(
            ticker: str,
            multiplier: int,
            timespan: Timespan,
            from_: str,
            to: str,
            adjusted: bool = True,
            sort: Literal["asc", "desc"] = "asc",
            limit: int = 50,
        ) -> dict[str, Any]:
            """Aggregated OHLC bars for a stock over a date range.
    
            Args:
                ticker: Stock symbol (e.g. "AAPL"). Case-sensitive.
                multiplier: Size of the timespan multiplier (e.g. 5 with timespan="minute" => 5-min bars).
                timespan: Bar size: second, minute, hour, day, week, month, quarter, year.
                from_: Start date "YYYY-MM-DD" or millisecond unix timestamp.
                to: End date "YYYY-MM-DD" or millisecond unix timestamp.
                adjusted: Whether to adjust for splits. Default true.
                sort: "asc" or "desc" by timestamp.
                limit: Max bars (Massive cap 50000). Default 50 to keep responses small.
            """
            path = f"/v2/aggs/ticker/{ticker}/range/{multiplier}/{timespan}/{from_}/{to}"
            return await client.get(path, {"adjusted": str(adjusted).lower(), "sort": sort, "limit": limit})
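The decorator pattern can be seen in isolation with a minimal stand-in for FastMCP. The fake class below is an assumption for illustration only, not the real FastMCP API; it mimics just enough of tool() to show how register() attaches the handler:

```python
# Minimal fake that mimics FastMCP's tool() decorator for illustration;
# not the real FastMCP API.
class FakeMCP:
    def __init__(self):
        self.tools = {}

    def tool(self):
        def decorator(fn):
            # Record the coroutine under its function name, like a tool registry.
            self.tools[fn.__name__] = fn
            return fn
        return decorator

def register(mcp, client):
    @mcp.tool()
    async def get_aggregates(ticker, multiplier, timespan, from_, to):
        # The real handler calls client.get(...); stubbed out here.
        return {"ticker": ticker}

mcp = FakeMCP()
register(mcp, client=None)
print(sorted(mcp.tools))  # ['get_aggregates']
```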
  • MassiveClient.get(): the HTTP helper that makes the actual GET request with retries, auth, and response trimming.
        async def get(
            self, path: str, params: dict[str, Any] | None = None, *, trim: bool = True
        ) -> dict[str, Any]:
            merged: dict[str, Any] = {k: v for k, v in (params or {}).items() if v is not None}
            if self._settings.auth_mode == "query":
                merged["apiKey"] = self._settings.api_key
    
            last_exc: Exception | None = None
            for attempt in range(MAX_RETRIES):
                try:
                    resp = await self._http.get(path, params=merged)
                except httpx.HTTPError as exc:
                    last_exc = exc
                    await asyncio.sleep(2**attempt)
                    continue
    
                if resp.status_code == 429:
                    retry_after = float(resp.headers.get("Retry-After", 2**attempt))
                    await asyncio.sleep(min(retry_after, 30))
                    continue
                if 500 <= resp.status_code < 600 and attempt < MAX_RETRIES - 1:
                    await asyncio.sleep(2**attempt)
                    continue
    
                if resp.status_code == 401:
                    hint = (
                        "auth rejected — verify MASSIVE_API_KEY; "
                        "if you used MASSIVE_AUTH_MODE=bearer, try 'query' (or vice versa)"
                    )
                    raise MassiveAPIError(401, hint, _strip_secrets(str(resp.request.url)))
    
                try:
                    data = resp.json()
                except ValueError:
                    data = {"raw": resp.text}
    
                if not resp.is_success:
                    msg = data.get("error") or data.get("message") or resp.reason_phrase or "request failed"
                    raise MassiveAPIError(resp.status_code, str(msg), _strip_secrets(str(resp.request.url)))
    
                return _trim(data) if trim else data
    
            raise MassiveAPIError(0, f"network error after {MAX_RETRIES} retries: {last_exc}", path)
    
    
    def _trim(data: dict[str, Any]) -> dict[str, Any]:
        """If `results` is a huge array, truncate and surface a hint to paginate."""
        results = data.get("results")
        if isinstance(results, list) and len(results) > TRIM_THRESHOLD:
            kept = results[:TRIM_THRESHOLD]
            data = dict(data)
            data["results"] = kept
            data["_truncated_note"] = (
                f"response had {len(results)} items; truncated to {TRIM_THRESHOLD}. "
                "Re-call with a tighter `limit` or use `cursor`/`next_url` to page."
            )
        if "next_url" in data and data.get("next_url"):
            cursor = _extract_cursor(data["next_url"])
            if cursor:
                data["next_cursor"] = cursor
        return data
    
    
    def _extract_cursor(next_url: str) -> str | None:
        parts = urlsplit(next_url)
        for kv in parts.query.split("&"):
            if kv.startswith("cursor="):
                return kv.split("=", 1)[1]
        return None
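The trimming and cursor extraction above can be exercised in isolation. The snippet below reuses the _extract_cursor logic verbatim with a deliberately tiny threshold (the real TRIM_THRESHOLD value is not shown in the excerpt, so a small stand-in is used):

```python
from urllib.parse import urlsplit

TRIM_THRESHOLD = 3  # tiny stand-in; the real constant is not shown in the excerpt

def _extract_cursor(next_url):
    # Same logic as the excerpt: pull the cursor= value out of the query string.
    parts = urlsplit(next_url)
    for kv in parts.query.split("&"):
        if kv.startswith("cursor="):
            return kv.split("=", 1)[1]
    return None

data = {
    "results": [1, 2, 3, 4, 5],
    "next_url": "https://api.example.com/v2/aggs?cursor=abc123&limit=50",
}
kept = data["results"][:TRIM_THRESHOLD]     # what _trim would keep
cursor = _extract_cursor(data["next_url"])  # surfaced as next_cursor
print(kept, cursor)  # [1, 2, 3] abc123
```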
  • Timespan type alias used by get_aggregates schema.
    Timespan = Literal["second", "minute", "hour", "day", "week", "month", "quarter", "year"]
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Because the tool declares no annotations, the description must carry the safety and side-effect disclosure itself; it only says "aggregated OHLC bars," omitting details such as data adjustment and rate limits.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single short sentence, which is concise but too sparse for an 8-parameter tool; it lacks structure and context.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Despite having an output schema, the description is incomplete: it does not explain the returned data format, edge cases, or limitations, leaving gaps for a moderately complex tool.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the description adds no extra parameter meaning beyond the schema; baseline 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states it provides aggregated OHLC bars for a stock over a date range, differentiating from single-bar tools like get_previous_close.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

There is no guidance on when to use this tool versus siblings such as get_quotes or get_previous_close; context-dependent selection is left entirely to the agent.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
