# compare_scenarios
Analyze and compare two simulation scenarios to evaluate differences in traffic flow, emissions, travel times, and other metrics. Use this tool to assess the impact of changes in digital twin simulations.
## Instructions
Performs detailed comparative analysis between two simulation scenarios, highlighting differences in traffic flow, emissions, travel times, and other key metrics.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| ctx | No | Optional MCP request context | |
| scenario1_name | No | Display name for the first scenario | Scenario 1 |
| scenario2_name | No | Display name for the second scenario | Scenario 2 |
| simulation_id1 | Yes | ID of the first simulation to compare | |
| simulation_id2 | Yes | ID of the second simulation to compare | |
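
A call to this tool only needs the two simulation IDs; the scenario names are optional labels. A minimal argument set might look like the sketch below, where the IDs and names are placeholders rather than real values:

```python
# Example arguments for compare_scenarios.
# The simulation IDs are placeholders; scenario names are optional and
# default to "Scenario 1" / "Scenario 2" when omitted.
arguments = {
    "simulation_id1": "sim-20240101-001",
    "simulation_id2": "sim-20240101-002",
    "scenario1_name": "Baseline",
    "scenario2_name": "Road Closure",
}
```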
## Implementation Reference
- `src/fujitsu_sdt_mcp/server.py:566-606` (handler): The core implementation of the `compare_scenarios` tool. This async function, decorated with `@mcp.tool()`, fetches metrics for the two specified simulations using the `FujitsuSocialDigitalTwinClient`, compares key metrics (CO2 emissions, travel time, traffic count), and returns a structured comparison dictionary. The `@mcp.tool()` decorator also serves as the registration mechanism in FastMCP.

```python
@mcp.tool()
async def compare_scenarios(
    simulation_id1: str,
    simulation_id2: str,
    scenario1_name: str = "Scenario 1",
    scenario2_name: str = "Scenario 2",
    ctx: Optional[Context] = None,
) -> Dict[str, Any]:
    """Performs detailed comparative analysis between two simulation scenarios,
    highlighting differences in traffic flow, emissions, travel times, and other key metrics."""
    try:
        if not simulation_id1 or not simulation_id2:
            return format_api_error(400, "Two simulation IDs required")

        async with await get_http_client() as client:
            api_client = FujitsuSocialDigitalTwinClient(client)
            metrics_result1 = await api_client.get_metrics(simulation_id1)
            metrics_result2 = await api_client.get_metrics(simulation_id2)

            if not metrics_result1.get("success") or not metrics_result2.get("success"):
                return format_api_error(500, "Metric retrieval failed")

            comparison = {
                "scenario1": {
                    "name": scenario1_name,
                    "metrics": metrics_result1.get("data", {}).get("metrics", {})
                },
                "scenario2": {
                    "name": scenario2_name,
                    "metrics": metrics_result2.get("data", {}).get("metrics", {})
                },
                "comparison": {
                    "timestamp": datetime.now().isoformat(),
                    "co2Difference": (
                        metrics_result2.get("data", {}).get("metrics", {}).get("co2", 0)
                        - metrics_result1.get("data", {}).get("metrics", {}).get("co2", 0)
                    ),
                    "travelTimeDifference": (
                        metrics_result2.get("data", {}).get("metrics", {}).get("travelTime", 0)
                        - metrics_result1.get("data", {}).get("metrics", {}).get("travelTime", 0)
                    ),
                    "trafficCountDifference": (
                        metrics_result2.get("data", {}).get("metrics", {}).get("trafficCountTotal", 0)
                        - metrics_result1.get("data", {}).get("metrics", {}).get("trafficCountTotal", 0)
                    ),
                },
            }

            return comparison
    except Exception as e:
        logger.error(f"Comparison error: {e}")
        return format_api_error(500, str(e))
```
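
Based on the handler above, a successful call returns a dictionary with `scenario1`, `scenario2`, and `comparison` keys, where each difference is computed as scenario 2 minus scenario 1. The sketch below shows the expected shape; all metric values are illustrative placeholders, not real simulation output:

```python
# Illustrative response shape for compare_scenarios; metric values are placeholders.
result = {
    "scenario1": {"name": "Baseline",
                  "metrics": {"co2": 1250.0, "travelTime": 840.0, "trafficCountTotal": 5200}},
    "scenario2": {"name": "Road Closure",
                  "metrics": {"co2": 1310.0, "travelTime": 910.0, "trafficCountTotal": 5150}},
    "comparison": {
        "timestamp": "2024-01-01T12:00:00",
        "co2Difference": 60.0,           # scenario2 - scenario1
        "travelTimeDifference": 70.0,    # scenario2 - scenario1
        "trafficCountDifference": -50,   # scenario2 - scenario1
    },
}

# A negative difference means scenario 2 performs better on that metric
# (for example, lower emissions or shorter travel time) than scenario 1.
if result["comparison"]["co2Difference"] < 0:
    print(f"{result['scenario2']['name']} reduces CO2 versus {result['scenario1']['name']}")
```

If either simulation ID is missing or metric retrieval fails, the handler instead returns the error structure produced by `format_api_error`.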