
Alpha ESS MCP Server

by michaelkrasa

get_one_day_power_data

Retrieve daily power consumption and generation data from Alpha ESS solar systems with hourly intervals and summary statistics for energy monitoring.

Instructions

Get one day's power data for a specific Alpha ESS system.
Returns structured timeseries data with hourly intervals and summary statistics.
If no serial is provided, auto-selects when only one system exists.

Args:
    query_date: Date in YYYY-MM-DD format
    serial: The serial number of the Alpha ESS system (optional)
    
Returns:
    dict: Enhanced response with structured timeseries data and analytics
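Since query_date must be a YYYY-MM-DD string, a caller can validate it up front before invoking the tool. The helper below is a hypothetical client-side sketch, not part of the server:

```python
from datetime import datetime

def is_valid_query_date(s: str) -> bool:
    """Check that a string matches the YYYY-MM-DD format the tool expects."""
    try:
        datetime.strptime(s, "%Y-%m-%d")
        return True
    except ValueError:
        return False

print(is_valid_query_date("2024-03-21"))  # True
print(is_valid_query_date("21/03/2024"))  # False (wrong format)
```

Invalid dates that slip through are surfaced by the handler's ValueError branch as a "Configuration or parameter error" response.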

Input Schema

Name        Required  Description  Default
query_date  Yes       -            -
serial      No        -            -

Output Schema


No arguments

Implementation Reference

  • main.py:520-590 (handler)
    The @mcp.tool()-decorated handler that implements the core logic: it auto-selects a serial if needed, fetches raw power data via the AlphaESS client's getOneDayPowerBySn(), structures it into a TimeSeries via a helper, and wraps the result in an enhanced response.
    @mcp.tool()
    async def get_one_day_power_data(query_date: str, serial: Optional[str] = None) -> dict[str, Any]:
        """
        Get one day's power data for a specific Alpha ESS system.
        Returns structured timeseries data with hourly intervals and summary statistics.
        If no serial is provided, auto-selects when only one system exists.
        
        Args:
            query_date: Date in YYYY-MM-DD format
            serial: The serial number of the Alpha ESS system (optional)
            
        Returns:
            dict: Enhanced response with structured timeseries data and analytics
        """
        client = None
        try:
            # Auto-discover serial if not provided
            if not serial:
                serial_info = await get_default_serial()
                if not serial_info['success'] or not serial_info['serial']:
                    return create_enhanced_response(
                        success=False,
                        message=f"Serial auto-discovery failed: {serial_info['message']}",
                        raw_data=None,
                        data_type="timeseries",
                        metadata={"available_systems": serial_info.get('systems', [])}
                    )
                serial = serial_info['serial']
    
            app_id, app_secret = get_alpha_credentials()
            client = alphaess(app_id, app_secret)
    
            # Get one day power data
            power_data = await client.getOneDayPowerBySn(serial, query_date)
    
            # Structure the timeseries data
            structured = structure_timeseries_data(power_data, serial)
    
            return create_enhanced_response(
                success=True,
                message=f"Successfully retrieved power data for {serial} on {query_date}",
                raw_data=None,  # Don't include raw data to reduce verbosity
                data_type="timeseries",
                serial_used=serial,
                metadata={
                    "query_date": query_date,
                    "interval": "1 hour",
                    "total_records": len(structured.series) if structured else 0,
                    "units": {"power": "W", "soc": "%", "energy": "kWh"}
                },
                structured_data=structured
            )
    
        except ValueError as e:
            return create_enhanced_response(
                success=False,
                message=f"Configuration or parameter error: {str(e)}",
                raw_data=None,
                data_type="timeseries"
            )
        except Exception as e:
            return create_enhanced_response(
                success=False,
                message=f"Error retrieving one day power data: {str(e)}",
                raw_data=None,
                data_type="timeseries"
            )
        finally:
            if client:
                await client.close()
  • Dataclasses that define the structured output schema for the timeseries data returned by the tool.
    @dataclass
    class TimeSeriesEntry:
        timestamp: str
        solar_power: int
        load_power: int
        battery_soc: float
        grid_feedin: int
        grid_import: int
        ev_charging: int
    
    
    @dataclass
    class TimeSeriesSummary:
        total_records: int
        interval: str
        time_span_hours: int
        solar: Dict[str, Any]
        battery: Dict[str, Any]
        grid: Dict[str, Any]
        load: Dict[str, Any]
    
    
    @dataclass
    class TimeSeries:
        series: List[TimeSeriesEntry]
        summary: TimeSeriesSummary
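    These dataclasses serialize cleanly with dataclasses.asdict, which is how the handler embeds them in the response's "structured" field. A minimal sketch, using a standalone copy of TimeSeriesEntry with made-up values:

    ```python
    from dataclasses import dataclass, asdict

    @dataclass
    class TimeSeriesEntry:
        timestamp: str
        solar_power: int
        load_power: int
        battery_soc: float
        grid_feedin: int
        grid_import: int
        ev_charging: int

    # Hypothetical hourly record; field names match the schema above.
    entry = TimeSeriesEntry(
        timestamp="2024-03-21 14:00:00",
        solar_power=3200, load_power=850, battery_soc=76.5,
        grid_feedin=2100, grid_import=0, ev_charging=0,
    )
    print(asdict(entry)["battery_soc"])  # 76.5
    ```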
  • Supporting function that processes raw API data into aggregated hourly TimeSeriesEntry records and computes comprehensive TimeSeriesSummary statistics, used directly by the handler.
    def structure_timeseries_data(raw_data: List[Dict], serial: str) -> TimeSeries:
        """Convert inefficient timeseries to structured format with hourly aggregation"""
        if not raw_data:
            return TimeSeries(series=[], summary=TimeSeriesSummary(total_records=0, interval="1 hour", time_span_hours=0, solar={}, battery={}, grid={}, load={}))
    
        # Group data by hour
        hourly_data = {}
        for record in raw_data:
            timestamp = record.get('uploadTime', '')
            if not timestamp:
                continue
    
            # Extract hour from timestamp (assumes format like "2024-03-21 14:30:00")
            hour = timestamp[:13] + ":00:00"  # Truncate to hour
    
            if hour not in hourly_data:
                hourly_data[hour] = {
                    "solar_power": [],
                    "load_power": [],
                    "battery_soc": [],
                    "grid_feedin": [],
                    "grid_import": [],
                    "ev_charging": []
                }
    
            # Collect all values for this hour
            hourly_data[hour]["solar_power"].append(record.get('ppv', 0))
            hourly_data[hour]["load_power"].append(record.get('load', 0))
            hourly_data[hour]["battery_soc"].append(record.get('cbat', 0))
            hourly_data[hour]["grid_feedin"].append(record.get('feedIn', 0))
            hourly_data[hour]["grid_import"].append(record.get('gridCharge', 0))
            hourly_data[hour]["ev_charging"].append(record.get('pchargingPile', 0))
    
        # Convert hourly data to averages
        series_entries = []
        for hour, data in sorted(hourly_data.items()):
            series_entries.append(TimeSeriesEntry(
                timestamp=hour,
                solar_power=round(sum(data["solar_power"]) / len(data["solar_power"])) if data["solar_power"] else 0,
                load_power=round(sum(data["load_power"]) / len(data["load_power"])) if data["load_power"] else 0,
                battery_soc=round(sum(data["battery_soc"]) / len(data["battery_soc"]), 1) if data["battery_soc"] else 0,
                grid_feedin=round(sum(data["grid_feedin"]) / len(data["grid_feedin"])) if data["grid_feedin"] else 0,
                grid_import=round(sum(data["grid_import"]) / len(data["grid_import"])) if data["grid_import"] else 0,
                ev_charging=round(sum(data["ev_charging"]) / len(data["ev_charging"])) if data["ev_charging"] else 0
            ))
    
        # Calculate summary statistics using hourly averages
        solar_values = [r.solar_power for r in series_entries]
        load_values = [r.load_power for r in series_entries]
        battery_values = [r.battery_soc for r in series_entries]
        feedin_values = [r.grid_feedin for r in series_entries]
    
        summary = TimeSeriesSummary(
            total_records=len(series_entries),
            interval="1 hour",
            time_span_hours=len(series_entries),
            solar={
                "peak_power": max(solar_values) if solar_values else 0,
                "avg_power": round(sum(solar_values) / len(solar_values)) if solar_values else 0,
                "total_generation_kwh": round(sum(solar_values) / 1000, 2)  # Convert W to kWh
            },
            battery={
                "max_soc": max(battery_values) if battery_values else 0,
                "min_soc": min(battery_values) if battery_values else 0,
                "avg_soc": round(sum(battery_values) / len(battery_values), 1) if battery_values else 0
            },
            grid={
                "total_feedin_kwh": round(sum(feedin_values) / 1000, 2),
                "peak_feedin": max(feedin_values) if feedin_values else 0
            },
            load={
                "peak_power": max(load_values) if load_values else 0,
                "avg_power": round(sum(load_values) / len(load_values)) if load_values else 0,
                "total_consumption_kwh": round(sum(load_values) / 1000, 2)
            }
        )
    
        return TimeSeries(series=series_entries, summary=summary)
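    The truncate-to-hour bucketing at the heart of this function can be exercised in isolation. The sketch below mirrors that grouping-and-averaging logic on made-up sample records (field names taken from the implementation above):

    ```python
    from collections import defaultdict

    def hourly_average(records, field):
        """Group records by the hour of their uploadTime and average one field,
        mirroring the truncation-based bucketing in structure_timeseries_data."""
        buckets = defaultdict(list)
        for rec in records:
            ts = rec.get("uploadTime", "")
            if not ts:
                continue
            # "2024-03-21 14:30:00"[:13] -> "2024-03-21 14"
            buckets[ts[:13] + ":00:00"].append(rec.get(field, 0))
        return {hour: round(sum(vals) / len(vals))
                for hour, vals in sorted(buckets.items())}

    # Two samples within the same hour are averaged together.
    sample = [
        {"uploadTime": "2024-03-21 14:05:00", "ppv": 3000},
        {"uploadTime": "2024-03-21 14:35:00", "ppv": 3400},
        {"uploadTime": "2024-03-21 15:05:00", "ppv": 2800},
    ]
    print(hourly_average(sample, "ppv"))
    # {'2024-03-21 14:00:00': 3200, '2024-03-21 15:00:00': 2800}
    ```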
  • main.py:24-52 (helper)
    Utility function to wrap tool responses in a consistent enhanced format with structured data, metadata, and success indicators, used by the handler.
    def create_enhanced_response(
            success: bool,
            message: str,
            raw_data: Any,
            data_type: DataType,
            serial_used: Optional[str] = None,
            metadata: Optional[Dict[str, Any]] = None,
            structured_data: Optional[Any] = None
    ) -> Dict[str, Any]:
        """Create a standardized response with enhanced structure"""
        response = {
            "success": success,
            "message": message,
            "data_type": data_type,
            "metadata": {
                "timestamp": datetime.now().isoformat(),
                **({"serial_used": serial_used} if serial_used else {}),
                **(metadata or {})
            },
            "data": raw_data
        }
    
        if structured_data is not None:
            if is_dataclass(structured_data):
                response["structured"] = asdict(structured_data)
            else:
                response["structured"] = structured_data
    
        return response
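    A quick sketch of the envelope this helper produces, using a simplified standalone copy (the real version also serializes dataclasses into a "structured" key):

    ```python
    from datetime import datetime
    from typing import Any, Dict, Optional

    def create_enhanced_response(
            success: bool,
            message: str,
            raw_data: Any,
            data_type: str,
            serial_used: Optional[str] = None,
            metadata: Optional[Dict[str, Any]] = None,
    ) -> Dict[str, Any]:
        """Simplified copy of the helper: a consistent envelope with a timestamp."""
        return {
            "success": success,
            "message": message,
            "data_type": data_type,
            "metadata": {
                "timestamp": datetime.now().isoformat(),
                **({"serial_used": serial_used} if serial_used else {}),
                **(metadata or {}),
            },
            "data": raw_data,
        }

    # Hypothetical serial number for illustration.
    resp = create_enhanced_response(True, "ok", None, "timeseries", serial_used="AL123")
    print(resp["metadata"]["serial_used"])  # AL123
    ```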
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden and does well by disclosing key behavioral traits: the auto-select behavior when only one system exists, the structured timeseries format with hourly intervals, and the inclusion of summary statistics. It doesn't mention rate limits, authentication requirements, or error conditions, but it provides substantial operational context.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is perfectly structured and concise: purpose statement first, output format second, behavioral note third, then parameter documentation. Every sentence earns its place with zero waste. The Args/Returns sections are clearly delineated but not redundant.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but has output schema), the description is complete enough. It explains what the tool does, when to use it, parameter semantics, and output characteristics. The output schema exists, so the description doesn't need to detail return values beyond the high-level 'Enhanced response with structured timeseries data and analytics' note.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The schema description coverage is 0%, so the description must compensate. It successfully adds meaning for both parameters: specifying the date format (YYYY-MM-DD) for query_date and explaining that serial is optional with auto-selection behavior when omitted. This provides crucial semantic information beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the specific action ('Get one day's power data'), target resource ('Alpha ESS system'), and output format ('structured timeseries data with hourly intervals and summary statistics'). It distinguishes from siblings like 'get_last_power_data' by specifying daily granularity and from 'get_one_date_energy_data' by focusing on power rather than energy data.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context for when to use this tool: for retrieving power data for a specific date and system. It mentions the auto-select behavior when no serial is provided, which is helpful guidance. However, it doesn't explicitly state when NOT to use it or name alternatives among siblings like 'get_last_power_data' for recent data.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
