
get_trend_data_es

Analyze health data trends over time by aggregating specific record types (e.g., steps, heart rate) into daily, weekly, monthly, or yearly intervals. Returns statistics like average, min, max, and count for each period.

Instructions

Get trend data for a specific health record type over time using Elasticsearch date histogram aggregation.

Parameters:

  • record_type: The type of health record to analyze (e.g., "HKQuantityTypeIdentifierStepCount")

  • interval: Time interval for aggregation; one of "day", "week", "month", or "year" (default: "month")

  • date_from, date_to: Optional ISO8601 date strings for filtering date range

Returns:

  • record_type: The analyzed record type

  • interval: The time interval used

  • trend_data: List of time buckets with statistics for each period:

    • date: The time period (ISO string)

    • value_sum: Sum of values for the period

    • avg_value: Average value for the period

    • min_value: Minimum value for the period

    • max_value: Maximum value for the period

    • count: Number of records in the period
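The buckets in trend_data can be combined client-side, for example to turn per-period averages into an overall figure. A minimal sketch follows; the numeric values below are invented sample data for illustration, not real output:

```python
# Hypothetical response fragment shaped like the Returns section above;
# the numbers are made up for illustration only.
result = {
    "record_type": "HKQuantityTypeIdentifierStepCount",
    "interval": "month",
    "trend_data": [
        {"date": "2024-01-01T00:00:00", "avg_value": 7200.0,
         "min_value": 1500.0, "max_value": 14800.0, "count": 31},
        {"date": "2024-02-01T00:00:00", "avg_value": 6900.0,
         "min_value": 900.0, "max_value": 13200.0, "count": 29},
    ],
}

# Weight each period's average by its record count to get an overall mean.
records = sum(b["count"] for b in result["trend_data"])
overall_avg = sum(b["avg_value"] * b["count"] for b in result["trend_data"]) / records
```

Weighting by count matters because periods with different record counts contribute unequally to a plain mean of averages.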

Notes for LLMs:

  • Use this to analyze trends, patterns, and seasonal variations in health data

  • The function automatically handles date filtering if date_from/date_to are provided

  • IMPORTANT - interval must be one of: "day", "week", "month", or "year". Do not use other values.

  • Do not guess, auto-fill, or assume any missing data.

  • When asked for medical advice, try to use my data from Elasticsearch first.

Input Schema

Name         Required  Description  Default
date_from    No
date_to      No
interval     No                     month
record_type  Yes

Output Schema

No fields defined

Implementation Reference

  • MCP tool handler for get_trend_data_es. Registers the tool with the ES Reader MCP router and delegates execution to the underlying logic function.
    @es_reader_router.tool
    def get_trend_data_es(
        record_type: RecordType | str,
        interval: IntervalType = "month",
        date_from: str | None = None,
        date_to: str | None = None,
    ) -> dict[str, Any]:
        """
        Get trend data for a specific health record type over time using
        Elasticsearch date histogram aggregation.
    
        Parameters:
        - record_type: The type of health record to analyze (e.g., "HKQuantityTypeIdentifierStepCount")
        - interval: Time interval for aggregation.
        - date_from, date_to: Optional ISO8601 date strings for filtering date range
    
        Returns:
        - record_type: The analyzed record type
        - interval: The time interval used
        - trend_data: List of time buckets with statistics for each period:
          * date: The time period (ISO string)
          * value_sum: Sum of values for the period
          * avg_value: Average value for the period
          * min_value: Minimum value for the period
          * max_value: Maximum value for the period
          * count: Number of records in the period
    
        Notes for LLMs:
        - Use this to analyze trends, patterns, and seasonal variations in health data
        - Keep in mind that when there is data from multiple devices spanning the same
          time period, there is a possibility of data being duplicated. Inform the user
          of this possibility if you see multiple devices in the same time period.
    - If a user asks you to sum up some values from their health records, DO NOT
      search for records and write a script to sum them; instead, use this tool.
      For example, to sum data from a year, call this tool with date_from set to
      the beginning of the year, date_to set to the end of the year, and an
      interval of 'year'.
        - The function automatically handles date filtering if date_from/date_to are provided
        - IMPORTANT - interval must be one of: "day", "week", "month", or "year".
          Do not use other values.
        - Do not guess, autofill, or assume any missing data.
    - If there are multiple databases available (DuckDB, ClickHouse, Elasticsearch):
      first, ask the user which one they want to use. DO NOT call any tools before
      the user specifies their intent.
    - If the user decides on an option, only use tools from that database;
      do not switch to another until the user says they want to use a
      different one. You do not have to keep asking whether the user wants
      to use the same database as before.
    - If there is only one database available (DuckDB, ClickHouse, Elasticsearch),
      you can use the tools from that database without the user specifying it.
        """
        try:
            return get_trend_data_logic(record_type, interval, date_from, date_to)
        except Exception as e:
            return {"error": f"Failed to get trend data: {str(e)}"}
  • Core implementation logic for retrieving trend data using Elasticsearch date_histogram aggregation, including query building, execution, and response processing.
    def get_trend_data_logic(
        record_type: RecordType | str,
        interval: IntervalType = "month",
        date_from: str | None = None,
        date_to: str | None = None,
    ) -> dict[str, Any]:
        must_conditions = [{"match": {"type": record_type}}]
        if date_from or date_to:
            range_cond = _build_range_condition("dateComponents", date_from, date_to)
            if range_cond:
                must_conditions.append(range_cond)
        query = {
            "size": 0,
            "query": {"bool": {"must": must_conditions}},
            "aggs": {
                "trend_over_time": {
                    "date_histogram": {"field": "dateComponents", "calendar_interval": interval},
                    "aggs": {
                        "avg_value": {"avg": {"field": "value"}},
                        "min_value": {"min": {"field": "value"}},
                        "max_value": {"max": {"field": "value"}},
                        "value_sum": {"sum": {"field": "value"}},
                        "count": {"value_count": {"field": "value"}},
                    },
                },
            },
        }
        response = _run_es_query(query)
        buckets = response["aggregations"]["trend_over_time"]["buckets"]
        trend_data = []
        for bucket in buckets:
            trend_data.append(
                {
                    "date": bucket["key_as_string"],
                    "avg_value": bucket["avg_value"]["value"],
                    "min_value": bucket["min_value"]["value"],
                    "max_value": bucket["max_value"]["value"],
                    "value_sum": bucket["value_sum"]["value"],
                    "count": bucket["count"]["value"],
                },
            )
        return {"record_type": record_type, "interval": interval, "trend_data": trend_data}
Behavior 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It does well by explaining: 'The function automatically handles date filtering if date_from/date_to are provided' (automation behavior), 'IMPORTANT - interval must be one of: "day", "week", "month", or "year". Do not use other values.' (validation constraints), and 'Do not guess, auto-fill, or assume any missing data.' (error handling). It doesn't mention rate limits or authentication needs, but covers key operational behaviors.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections (Parameters, Returns, Notes for LLMs) and front-loaded purpose. Most sentences earn their place, though the 'When asked for medical advice' note feels slightly tangential. Overall efficient with minimal waste.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's complexity (4 parameters, trend analysis), no annotations, and the presence of an output schema, the description is remarkably complete. It covers purpose, usage, parameters, return format (though output schema exists), behavioral constraints, and LLM-specific guidance. Nothing essential is missing for effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters 5/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description fully compensates by providing detailed parameter semantics. It explains: 'record_type: The type of health record to analyze (e.g., "HKQuantityTypeIdentifierStepCount")' with an example, 'interval: Time interval for aggregation' with allowed values, and 'date_from, date_to: Optional ISO8601 date strings for filtering date range' with format and optionality. This adds substantial meaning beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Get trend data for a specific health record type over time using Elasticsearch date histogram aggregation.' It specifies both the verb ('Get trend data') and resource ('health record type'), and distinguishes it from siblings by mentioning Elasticsearch aggregation and trend analysis, unlike summary or search tools.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides explicit usage guidance: 'Use this to analyze trends, patterns, and seasonal variations in health data' and 'When asked for medical advice, try to use my data from ElasticSearch first.' It clearly indicates when to use this tool (for trend analysis) versus alternatives like summary or search tools.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/the-momentum/apple-health-mcp-server'
