get_one_day_power_data
Retrieve one day of power consumption and generation data for an Alpha ESS solar system, aggregated into hourly intervals with summary statistics for energy monitoring.
Instructions
Get one day's power data for a specific Alpha ESS system.
Returns structured timeseries data with hourly intervals and summary statistics.
If no serial provided, auto-selects if only one system exists.
Args:
query_date: Date in YYYY-MM-DD format
serial: The serial number of the Alpha ESS system (optional)
Returns:
dict: Enhanced response with structured timeseries data and analytics
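For illustration, a successful result has roughly the shape below. The field names follow create_enhanced_response and the TimeSeries dataclasses listed under Implementation Reference; the serial and all numeric values are hypothetical.

```python
# Hypothetical successful response (illustrative values only)
example_response = {
    "success": True,
    "message": "Successfully retrieved power data for AL1234567890 on 2024-03-21",
    "data_type": "timeseries",
    "metadata": {
        "timestamp": "2024-03-22T08:15:00",    # generated when the response is built
        "serial_used": "AL1234567890",         # hypothetical serial
        "query_date": "2024-03-21",
        "interval": "1 hour",
        "total_records": 24,
        "units": {"power": "W", "soc": "%", "energy": "kWh"},
    },
    "data": None,                              # raw payload is omitted to reduce verbosity
    "structured": {
        "series": [
            {"timestamp": "2024-03-21 12:00:00", "solar_power": 3200, "load_power": 850,
             "battery_soc": 76.5, "grid_feedin": 2100, "grid_import": 0, "ev_charging": 0},
            # ... one entry per hour of the day
        ],
        "summary": {
            "total_records": 24, "interval": "1 hour", "time_span_hours": 24,
            "solar": {"peak_power": 4100, "avg_power": 1350, "total_generation_kwh": 32.4},
            "battery": {"max_soc": 95.0, "min_soc": 22.0, "avg_soc": 61.3},
            "grid": {"total_feedin_kwh": 14.2, "peak_feedin": 3000},
            "load": {"peak_power": 2900, "avg_power": 780, "total_consumption_kwh": 18.7},
        },
    },
}
```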
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query_date | Yes | Date in YYYY-MM-DD format | |
| serial | No | Serial number of the Alpha ESS system; auto-selected if only one system exists | |
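A minimal call only needs query_date; serial can be omitted when a single system is registered. The serial below is a made-up placeholder:

```python
# Example argument payloads (the serial is a hypothetical placeholder)
args_auto = {"query_date": "2024-03-21"}                                # auto-selects the only registered system
args_explicit = {"query_date": "2024-03-21", "serial": "AL1234567890"}  # targets a specific system
```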
Implementation Reference
- main.py:520-590 (handler): The @mcp.tool()-decorated handler function that implements the core logic: auto-selects the serial if needed, fetches raw power data via the AlphaESS client.getOneDayPowerBySn(), structures it into a TimeSeries with the structure_timeseries_data helper, and wraps it in an enhanced response.

```python
@mcp.tool()
async def get_one_day_power_data(query_date: str, serial: Optional[str] = None) -> dict[str, Any]:
    """
    Get one day's power data for a specific Alpha ESS system.
    Returns structured timeseries data with hourly intervals and summary statistics.
    If no serial provided, auto-selects if only one system exists.

    Args:
        query_date: Date in YYYY-MM-DD format
        serial: The serial number of the Alpha ESS system (optional)

    Returns:
        dict: Enhanced response with structured timeseries data and analytics
    """
    client = None
    try:
        # Auto-discover serial if not provided
        if not serial:
            serial_info = await get_default_serial()
            if not serial_info['success'] or not serial_info['serial']:
                return create_enhanced_response(
                    success=False,
                    message=f"Serial auto-discovery failed: {serial_info['message']}",
                    raw_data=None,
                    data_type="timeseries",
                    metadata={"available_systems": serial_info.get('systems', [])}
                )
            serial = serial_info['serial']

        app_id, app_secret = get_alpha_credentials()
        client = alphaess(app_id, app_secret)

        # Get one day power data
        power_data = await client.getOneDayPowerBySn(serial, query_date)

        # Structure the timeseries data
        structured = structure_timeseries_data(power_data, serial)

        return create_enhanced_response(
            success=True,
            message=f"Successfully retrieved power data for {serial} on {query_date}",
            raw_data=None,  # Don't include raw data to reduce verbosity
            data_type="timeseries",
            serial_used=serial,
            metadata={
                "query_date": query_date,
                "interval": "1 hour",
                "total_records": len(structured.series) if structured else 0,
                "units": {"power": "W", "soc": "%", "energy": "kWh"}
            },
            structured_data=structured
        )

    except ValueError as e:
        return create_enhanced_response(
            success=False,
            message=f"Configuration or parameter error: {str(e)}",
            raw_data=None,
            data_type="timeseries"
        )
    except Exception as e:
        return create_enhanced_response(
            success=False,
            message=f"Error retrieving one day power data: {str(e)}",
            raw_data=None,
            data_type="timeseries"
        )
    finally:
        if client:
            await client.close()
```
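The auto-discovery branch only touches a few keys of whatever get_default_serial() returns (that helper is not reproduced here). A minimal sketch of the shape the handler appears to expect, with invented values:

```python
# Assumed shape of get_default_serial() results, inferred from the handler's accesses
serial_info_ok = {
    "success": True,
    "serial": "AL1234567890",        # hypothetical serial
    "message": "Auto-selected the only registered system",
}
serial_info_err = {
    "success": False,
    "serial": None,
    "message": "Multiple systems found; pass a serial explicitly",
    "systems": ["AL1234567890", "AL0987654321"],   # surfaced as metadata["available_systems"]
}
```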
- models.py:5-31 (schema): Dataclasses that define the structured output schema for the timeseries data returned by the tool.

```python
@dataclass
class TimeSeriesEntry:
    timestamp: str
    solar_power: int
    load_power: int
    battery_soc: float
    grid_feedin: int
    grid_import: int
    ev_charging: int

@dataclass
class TimeSeriesSummary:
    total_records: int
    interval: str
    time_span_hours: int
    solar: Dict[str, Any]
    battery: Dict[str, Any]
    grid: Dict[str, Any]
    load: Dict[str, Any]

@dataclass
class TimeSeries:
    series: List[TimeSeriesEntry]
    summary: TimeSeriesSummary
```
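A quick sketch of building these dataclasses by hand and flattening them with dataclasses.asdict, which is how create_enhanced_response (below) serializes the structured payload; all values are illustrative:

```python
from dataclasses import asdict

entry = TimeSeriesEntry(timestamp="2024-03-21 12:00:00", solar_power=3200, load_power=850,
                        battery_soc=76.5, grid_feedin=2100, grid_import=0, ev_charging=0)
summary = TimeSeriesSummary(total_records=1, interval="1 hour", time_span_hours=1,
                            solar={"peak_power": 3200}, battery={"avg_soc": 76.5},
                            grid={"peak_feedin": 2100}, load={"avg_power": 850})
ts = TimeSeries(series=[entry], summary=summary)

asdict(ts)   # nested plain dicts/lists, ready for JSON serialization
```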
- main.py:55-133 (helper): Supporting function that processes raw API data into aggregated hourly TimeSeriesEntry records and computes comprehensive TimeSeriesSummary statistics, used directly by the handler.

```python
def structure_timeseries_data(raw_data: List[Dict], serial: str) -> TimeSeries:
    """Convert inefficient timeseries to structured format with hourly aggregation"""
    if not raw_data:
        return TimeSeries(series=[], summary=TimeSeriesSummary(
            total_records=0, interval="1 hour", time_span_hours=0,
            solar={}, battery={}, grid={}, load={}))

    # Group data by hour
    hourly_data = {}
    for record in raw_data:
        timestamp = record.get('uploadTime', '')
        if not timestamp:
            continue

        # Extract hour from timestamp (assumes format like "2024-03-21 14:30:00")
        hour = timestamp[:13] + ":00:00"  # Truncate to hour

        if hour not in hourly_data:
            hourly_data[hour] = {
                "solar_power": [],
                "load_power": [],
                "battery_soc": [],
                "grid_feedin": [],
                "grid_import": [],
                "ev_charging": []
            }

        # Collect all values for this hour
        hourly_data[hour]["solar_power"].append(record.get('ppv', 0))
        hourly_data[hour]["load_power"].append(record.get('load', 0))
        hourly_data[hour]["battery_soc"].append(record.get('cbat', 0))
        hourly_data[hour]["grid_feedin"].append(record.get('feedIn', 0))
        hourly_data[hour]["grid_import"].append(record.get('gridCharge', 0))
        hourly_data[hour]["ev_charging"].append(record.get('pchargingPile', 0))

    # Convert hourly data to averages
    series_entries = []
    for hour, data in sorted(hourly_data.items()):
        series_entries.append(TimeSeriesEntry(
            timestamp=hour,
            solar_power=round(sum(data["solar_power"]) / len(data["solar_power"])) if data["solar_power"] else 0,
            load_power=round(sum(data["load_power"]) / len(data["load_power"])) if data["load_power"] else 0,
            battery_soc=round(sum(data["battery_soc"]) / len(data["battery_soc"]), 1) if data["battery_soc"] else 0,
            grid_feedin=round(sum(data["grid_feedin"]) / len(data["grid_feedin"])) if data["grid_feedin"] else 0,
            grid_import=round(sum(data["grid_import"]) / len(data["grid_import"])) if data["grid_import"] else 0,
            ev_charging=round(sum(data["ev_charging"]) / len(data["ev_charging"])) if data["ev_charging"] else 0
        ))

    # Calculate summary statistics using hourly averages
    solar_values = [r.solar_power for r in series_entries]
    load_values = [r.load_power for r in series_entries]
    battery_values = [r.battery_soc for r in series_entries]
    feedin_values = [r.grid_feedin for r in series_entries]

    summary = TimeSeriesSummary(
        total_records=len(series_entries),
        interval="1 hour",
        time_span_hours=len(series_entries),
        solar={
            "peak_power": max(solar_values) if solar_values else 0,
            "avg_power": round(sum(solar_values) / len(solar_values)) if solar_values else 0,
            "total_generation_kwh": round(sum(solar_values) / 1000, 2)  # Convert W to kWh
        },
        battery={
            "max_soc": max(battery_values) if battery_values else 0,
            "min_soc": min(battery_values) if battery_values else 0,
            "avg_soc": round(sum(battery_values) / len(battery_values), 1) if battery_values else 0
        },
        grid={
            "total_feedin_kwh": round(sum(feedin_values) / 1000, 2),
            "peak_feedin": max(feedin_values) if feedin_values else 0
        },
        load={
            "peak_power": max(load_values) if load_values else 0,
            "avg_power": round(sum(load_values) / len(load_values)) if load_values else 0,
            "total_consumption_kwh": round(sum(load_values) / 1000, 2)
        }
    )

    return TimeSeries(series=series_entries, summary=summary)
```
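A usage sketch: two raw readings that fall in the same hour are averaged into a single hourly entry. The input keys (uploadTime, ppv, load, cbat, feedIn, gridCharge, pchargingPile) are the ones the helper reads; the readings and serial are invented:

```python
raw = [
    {"uploadTime": "2024-03-21 12:00:00", "ppv": 3000, "load": 800, "cbat": 76.0,
     "feedIn": 2000, "gridCharge": 0, "pchargingPile": 0},
    {"uploadTime": "2024-03-21 12:30:00", "ppv": 3400, "load": 900, "cbat": 77.0,
     "feedIn": 2200, "gridCharge": 0, "pchargingPile": 0},
]

ts = structure_timeseries_data(raw, serial="AL1234567890")  # hypothetical serial
ts.series[0].solar_power        # 3200, the average of the two 12:00-hour readings
ts.summary.battery["avg_soc"]   # 76.5
```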
- main.py:24-52 (helper): Utility function to wrap tool responses in a consistent enhanced format with structured data, metadata, and success indicators, used by the handler.

```python
def create_enhanced_response(
    success: bool,
    message: str,
    raw_data: Any,
    data_type: DataType,
    serial_used: Optional[str] = None,
    metadata: Optional[Dict[str, Any]] = None,
    structured_data: Optional[Any] = None
) -> Dict[str, Any]:
    """Create a standardized response with enhanced structure"""
    response = {
        "success": success,
        "message": message,
        "data_type": data_type,
        "metadata": {
            "timestamp": datetime.now().isoformat(),
            **({"serial_used": serial_used} if serial_used else {}),
            **(metadata or {})
        },
        "data": raw_data
    }

    if structured_data is not None:
        if is_dataclass(structured_data):
            response["structured"] = asdict(structured_data)
        else:
            response["structured"] = structured_data

    return response
```
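To round things out, a small sketch of the wrapper's metadata merging: the generation timestamp is always present, serial_used is added only when supplied, and caller metadata is merged on top (the serial below is hypothetical):

```python
resp = create_enhanced_response(
    success=True,
    message="ok",
    raw_data=None,                 # raw payload deliberately omitted, matching the handler
    data_type="timeseries",
    serial_used="AL1234567890",
    metadata={"query_date": "2024-03-21"},
)
sorted(resp["metadata"])           # ['query_date', 'serial_used', 'timestamp']
```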