# compare_scenarios
Compare two simulation scenarios to analyze differences in traffic flow, emissions, travel times, and other key metrics for urban planning decisions.
## Instructions
Performs detailed comparative analysis between two simulation scenarios, highlighting differences in traffic flow, emissions, travel times, and other key metrics.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| simulation_id1 | Yes | ID of the first simulation to compare | |
| simulation_id2 | Yes | ID of the second simulation to compare | |
| scenario1_name | No | Display name for the first scenario | Scenario 1 |
| scenario2_name | No | Display name for the second scenario | Scenario 2 |
| ctx | No | Optional MCP request context, injected by the framework | None |
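
The following is a minimal invocation sketch using the official Python MCP client over stdio. The launch command, simulation IDs, and scenario names are assumptions for illustration only; adjust them to your deployment.

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Assumes the server can be launched as a module; your command may differ.
    params = StdioServerParameters(command="python", args=["-m", "fujitsu_sdt_mcp.server"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "compare_scenarios",
                {
                    "simulation_id1": "sim-001",  # placeholder IDs
                    "simulation_id2": "sim-002",
                    "scenario1_name": "Baseline",
                    "scenario2_name": "Road Closure",
                },
            )
            print(result)

asyncio.run(main())
```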
## Implementation Reference
- src/fujitsu_sdt_mcp/server.py:566-606 (handler): Main handler function for the compare_scenarios tool, decorated with @mcp.tool() to register it as an MCP tool. It retrieves metrics for two simulation IDs, computes differences in CO2, travel time, and traffic count, and returns a structured comparison.

```python
@mcp.tool()
async def compare_scenarios(
    simulation_id1: str,
    simulation_id2: str,
    scenario1_name: str = "Scenario 1",
    scenario2_name: str = "Scenario 2",
    ctx: Optional[Context] = None,
) -> Dict[str, Any]:
    """Performs detailed comparative analysis between two simulation scenarios,
    highlighting differences in traffic flow, emissions, travel times, and other
    key metrics."""
    try:
        if not simulation_id1 or not simulation_id2:
            return format_api_error(400, "Two simulation IDs required")

        async with await get_http_client() as client:
            api_client = FujitsuSocialDigitalTwinClient(client)
            metrics_result1 = await api_client.get_metrics(simulation_id1)
            metrics_result2 = await api_client.get_metrics(simulation_id2)

            if not metrics_result1.get("success") or not metrics_result2.get("success"):
                return format_api_error(500, "Metric retrieval failed")

            comparison = {
                "scenario1": {
                    "name": scenario1_name,
                    "metrics": metrics_result1.get("data", {}).get("metrics", {}),
                },
                "scenario2": {
                    "name": scenario2_name,
                    "metrics": metrics_result2.get("data", {}).get("metrics", {}),
                },
                "comparison": {
                    "timestamp": datetime.now().isoformat(),
                    "co2Difference": (
                        metrics_result2.get("data", {}).get("metrics", {}).get("co2", 0)
                        - metrics_result1.get("data", {}).get("metrics", {}).get("co2", 0)
                    ),
                    "travelTimeDifference": (
                        metrics_result2.get("data", {}).get("metrics", {}).get("travelTime", 0)
                        - metrics_result1.get("data", {}).get("metrics", {}).get("travelTime", 0)
                    ),
                    "trafficCountDifference": (
                        metrics_result2.get("data", {}).get("metrics", {}).get("trafficCountTotal", 0)
                        - metrics_result1.get("data", {}).get("metrics", {}).get("trafficCountTotal", 0)
                    ),
                },
            }
            return comparison
    except Exception as e:
        logger.error(f"Comparison error: {e}")
        return format_api_error(500, str(e))
```
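
To make the comparison payload concrete, the sketch below reproduces the handler's difference arithmetic on two hypothetical metric envelopes shaped like get_metrics results; all numbers are illustrative.

```python
from datetime import datetime

# Hypothetical payloads shaped like the {"success", "data": {"metrics": ...}}
# envelopes returned by get_metrics (values are illustrative only).
result1 = {"success": True, "data": {"metrics": {"co2": 120.0, "travelTime": 34.5, "trafficCountTotal": 5200}}}
result2 = {"success": True, "data": {"metrics": {"co2": 104.0, "travelTime": 31.2, "trafficCountTotal": 4880}}}

def metric(result: dict, key: str):
    # Same defensive lookup the handler uses: missing keys fall back to 0.
    return result.get("data", {}).get("metrics", {}).get(key, 0)

comparison = {
    "timestamp": datetime.now().isoformat(),
    "co2Difference": metric(result2, "co2") - metric(result1, "co2"),                      # -16.0
    "travelTimeDifference": metric(result2, "travelTime") - metric(result1, "travelTime"),  # ~ -3.3
    "trafficCountDifference": metric(result2, "trafficCountTotal") - metric(result1, "trafficCountTotal"),  # -320
}
print(comparison)
```

Because each difference is computed as scenario 2 minus scenario 1, a negative co2Difference or travelTimeDifference indicates scenario 2 emits less CO2 or has shorter travel times than scenario 1.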
- src/fujitsu_sdt_mcp/server.py:566 (registration): The @mcp.tool() decorator registers compare_scenarios as an MCP tool on the FastMCP instance.

```python
@mcp.tool()
```

- src/fujitsu_sdt_mcp/server.py:40-45 (helper): Helper used by compare_scenarios to format error responses.

```python
def format_api_error(status_code: int, error_detail: str) -> Dict[str, Any]:
    return {
        "success": False,
        "status_code": status_code,
        "error": error_detail,
    }
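
For reference, the guard clause in the handler above produces this envelope when a simulation ID is missing:

```python
format_api_error(400, "Two simulation IDs required")
# -> {"success": False, "status_code": 400, "error": "Two simulation IDs required"}
```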
- FujitsuSocialDigitalTwinClient.get_metrics: The client method called by compare_scenarios to fetch metrics for each simulation.

```python
async def get_metrics(self, simulation_id: str) -> Dict[str, Any]:
    try:
        response = await self.client.get(f"/api/metrics/{simulation_id}")
        response.raise_for_status()
        return format_simulation_result(response.json())
    except httpx.HTTPStatusError as e:
        logger.error(f"Metrics retrieval error: {e}")
        return format_api_error(e.response.status_code, str(e))
    except Exception as e:
        logger.error(f"Unexpected error retrieving metrics: {e}")
        return format_api_error(500, str(e))
```
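
The sketch below shows the raw HTTP call that get_metrics wraps, issued with httpx directly. The base URL and simulation ID are placeholders, and the server's format_simulation_result post-processing is omitted.

```python
import asyncio
import httpx

async def fetch_metrics(simulation_id: str) -> dict:
    # "https://sdt.example.com" is a hypothetical host; the real client is
    # configured elsewhere in the server via get_http_client().
    async with httpx.AsyncClient(base_url="https://sdt.example.com") as client:
        response = await client.get(f"/api/metrics/{simulation_id}")
        response.raise_for_status()
        return response.json()

print(asyncio.run(fetch_metrics("sim-001")))  # placeholder ID
```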