compare_forecast_actual
Analyze electricity demand forecast accuracy by comparing predicted vs actual values and calculating error metrics (MAE, RMSE) for a given date.
Instructions
Compare forecasted vs actual electricity demand. Calculates forecast accuracy metrics (error, MAE, RMSE) for demand predictions.

Args:
    date: Date in YYYY-MM-DD format

Returns:
    JSON string with forecast comparison and accuracy metrics.

Example: compare forecast accuracy for Oct 8:

    >>> await compare_forecast_actual("2025-10-08")
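For orientation, here is a sketch of the result dict the handler builds before `ResponseFormatter.success()` serializes it. The field names follow the implementation listed under Implementation Reference below; all numbers are illustrative placeholders, not real REE data.

```python
# Illustrative response shape only; values are made up, not real REE data.
example_result = {
    "date": "2025-10-08",
    "comparisons": [
        {
            "datetime": "2025-10-08T00:00:00",
            "forecast_mw": 25100.0,
            "actual_mw": 24980.0,
            "error_mw": 120.0,
            "error_percentage": 0.48,
        },
        # ...one entry per hour of the requested day
    ],
    "accuracy_metrics": {
        "mean_absolute_error_mw": 130.25,
        "root_mean_squared_error_mw": 155.4,
        "mean_error_mw": 12.3,
        "mean_absolute_percentage_error": 0.52,
        "bias": "overforecast",
    },
}
```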
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| date | Yes | Date in YYYY-MM-DD format | (no default) |
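The generated JSON Schema is not reproduced here. A minimal equivalent, assuming the standard MCP convention of one required string parameter derived from the function signature, would look roughly like the sketch below (FastMCP emits the real schema, which may differ in detail):

```python
# Assumed input schema; FastMCP derives the actual one from the signature.
input_schema = {
    "type": "object",
    "properties": {
        "date": {
            "type": "string",
            "description": "Date in YYYY-MM-DD format",
        },
    },
    "required": ["date"],
}
```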
Implementation Reference
- Handler function for the 'compare_forecast_actual' tool. Fetches forecast and actual demand data from the REE API, computes per-hour errors plus MAE, RMSE, and MAPE, and returns the comparison metrics as a JSON string. Registered via the `@mcp.tool()` decorator.

  ```python
  @mcp.tool()
  async def compare_forecast_actual(date: str) -> str:
      """Compare forecasted vs actual electricity demand.

      Calculates forecast accuracy metrics (error, MAE, RMSE) for demand predictions.

      Args:
          date: Date in YYYY-MM-DD format

      Returns:
          JSON string with forecast comparison and accuracy metrics.

      Examples:
          Compare forecast accuracy for Oct 8:
          >>> await compare_forecast_actual("2025-10-08")
      """
      try:
          start_date, end_date = DateTimeHelper.build_day_range(date)

          async with ToolExecutor() as executor:
              use_case = executor.create_get_indicator_data_use_case()

              # Get forecast data
              forecast_request = GetIndicatorDataRequest(
                  indicator_id=IndicatorIDs.DEMAND_FORECAST.id,
                  start_date=start_date,
                  end_date=end_date,
                  time_granularity="hour",
              )
              forecast_response = await use_case.execute(forecast_request)
              forecast_data = forecast_response.model_dump()

              # Get actual data
              actual_request = GetIndicatorDataRequest(
                  indicator_id=IndicatorIDs.REAL_DEMAND_PENINSULAR.id,
                  start_date=start_date,
                  end_date=end_date,
                  time_granularity="hour",
              )
              actual_response = await use_case.execute(actual_request)
              actual_data = actual_response.model_dump()

              # Compare values
              forecast_values = forecast_data.get("values", [])
              actual_values = actual_data.get("values", [])

              comparisons = []
              errors = []
              absolute_errors = []
              squared_errors = []

              for forecast, actual in zip(forecast_values, actual_values, strict=False):
                  forecast_mw = forecast["value"]
                  actual_mw = actual["value"]
                  error_mw = forecast_mw - actual_mw
                  error_pct = (error_mw / actual_mw * 100) if actual_mw > 0 else 0

                  comparisons.append(
                      {
                          "datetime": forecast["datetime"],
                          "forecast_mw": forecast_mw,
                          "actual_mw": actual_mw,
                          "error_mw": round(error_mw, 2),
                          "error_percentage": round(error_pct, 2),
                      }
                  )
                  errors.append(error_mw)
                  absolute_errors.append(abs(error_mw))
                  squared_errors.append(error_mw**2)

              # Calculate accuracy metrics
              accuracy_metrics = {}
              if errors:
                  mae = sum(absolute_errors) / len(absolute_errors)
                  rmse = (sum(squared_errors) / len(squared_errors)) ** 0.5
                  mean_error = sum(errors) / len(errors)
                  mape = sum(
                      abs(e / a["value"]) * 100
                      for e, a in zip(errors, actual_values, strict=False)
                  ) / len(errors)

                  accuracy_metrics = {
                      "mean_absolute_error_mw": round(mae, 2),
                      "root_mean_squared_error_mw": round(rmse, 2),
                      "mean_error_mw": round(mean_error, 2),
                      "mean_absolute_percentage_error": round(mape, 2),
                      "bias": (
                          "overforecast"
                          if mean_error > 0
                          else "underforecast"
                          if mean_error < 0
                          else "unbiased"
                      ),
                  }

              result = {
                  "date": date,
                  "comparisons": comparisons,
                  "accuracy_metrics": accuracy_metrics,
              }

              return ResponseFormatter.success(result)

      except Exception as e:
          return ResponseFormatter.unexpected_error(e, context="Error comparing forecast")
  ```
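  To make the metric arithmetic above easy to check by hand, here is a standalone sketch that applies the same formulas to made-up hourly values (illustrative numbers only, not REE data):

  ```python
  # Same error-metric formulas as the handler, applied to three fake hours (MW).
  forecast = [25100.0, 26400.0, 27950.0]
  actual = [24980.0, 26550.0, 27800.0]

  errors = [f - a for f, a in zip(forecast, actual)]       # signed errors: [120.0, -150.0, 150.0]
  mae = sum(abs(e) for e in errors) / len(errors)          # 140.0
  rmse = (sum(e**2 for e in errors) / len(errors)) ** 0.5  # ~140.71
  mean_error = sum(errors) / len(errors)                   # 40.0 -> positive, i.e. "overforecast"
  mape = sum(abs(e / a) * 100 for e, a in zip(errors, actual)) / len(errors)  # ~0.53 %

  print(round(mae, 2), round(rmse, 2), round(mean_error, 2), round(mape, 2))
  ```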
- src/ree_mcp/interface/mcp_server.py:410 (registration): the 'compare_forecast_actual' tool is registered with FastMCP's `@mcp.tool()` decorator, as shown in the listing above. A hypothetical client-side call is sketched below.
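For completeness, here is a hypothetical end-to-end call using FastMCP's in-memory client. It assumes the FastMCP 2.x `Client` API and that the server object is importable as `mcp` from the module referenced above; treat it as a sketch, not project test code.

```python
import asyncio

from fastmcp import Client

from ree_mcp.interface.mcp_server import mcp  # assumed import path (src layout)


async def main() -> None:
    # Connects to the server in-process and invokes the tool by name.
    async with Client(mcp) as client:
        result = await client.call_tool("compare_forecast_actual", {"date": "2025-10-08"})
        print(result)  # wraps the JSON payload described under Instructions


asyncio.run(main())
```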