Fujitsu Social Digital Twin MCP Server

by 3a3

compare_scenarios

Compare two simulation scenarios to analyze differences in traffic flow, emissions, travel times, and other key metrics for urban planning decisions.

Instructions

Performs detailed comparative analysis between two simulation scenarios, highlighting differences in traffic flow, emissions, travel times, and other key metrics.

Input Schema

Name             Required  Description  Default
simulation_id1   Yes       -            -
simulation_id2   Yes       -            -
scenario1_name   No        -            Scenario 1
scenario2_name   No        -            Scenario 2
ctx              No        -            -
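
For illustration only, a client might pass arguments such as the following; the simulation IDs are hypothetical placeholders, and the optional names override the defaults shown above:

    arguments = {
        "simulation_id1": "sim-baseline-001",   # hypothetical ID of the baseline run
        "simulation_id2": "sim-variant-002",    # hypothetical ID of the run to compare
        "scenario1_name": "Baseline",           # optional label (default: "Scenario 1")
        "scenario2_name": "Road pricing",       # optional label (default: "Scenario 2")
    }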

Implementation Reference

  • Main handler function for the compare_scenarios tool. Decorated with @mcp.tool() to register as an MCP tool. Retrieves metrics for two simulation IDs, computes differences in CO2, travel time, and traffic count, and returns a structured comparison.
    @mcp.tool()
    async def compare_scenarios(simulation_id1: str, simulation_id2: str, 
                         scenario1_name: str = "Scenario 1", scenario2_name: str = "Scenario 2",
                         ctx: Optional[Context] = None) -> Dict[str, Any]:
        """Performs detailed comparative analysis between two simulation scenarios, highlighting differences 
        in traffic flow, emissions, travel times, and other key metrics."""
        try:
            if not simulation_id1 or not simulation_id2:
                return format_api_error(400, "Two simulation IDs required")
            
            async with await get_http_client() as client:
                api_client = FujitsuSocialDigitalTwinClient(client)
                metrics_result1 = await api_client.get_metrics(simulation_id1)
                metrics_result2 = await api_client.get_metrics(simulation_id2)
            
            if not metrics_result1.get("success") or not metrics_result2.get("success"):
                return format_api_error(500, "Metric retrieval failed")
            
            comparison = {
                "scenario1": {
                    "name": scenario1_name,
                    "metrics": metrics_result1.get("data", {}).get("metrics", {})
                },
                "scenario2": {
                    "name": scenario2_name,
                    "metrics": metrics_result2.get("data", {}).get("metrics", {})
                },
                "comparison": {
                    "timestamp": datetime.now().isoformat(),
                    "co2Difference": metrics_result2.get('data', {}).get('metrics', {}).get('co2', 0) - 
                                    metrics_result1.get('data', {}).get('metrics', {}).get('co2', 0),
                    "travelTimeDifference": metrics_result2.get('data', {}).get('metrics', {}).get('travelTime', 0) - 
                                            metrics_result1.get('data', {}).get('metrics', {}).get('travelTime', 0),
                    "trafficCountDifference": metrics_result2.get('data', {}).get('metrics', {}).get('trafficCountTotal', 0) - 
                                             metrics_result1.get('data', {}).get('metrics', {}).get('trafficCountTotal', 0)
                }
            }
            return comparison
        except Exception as e:
            logger.error(f"Comparison error: {e}")
            return format_api_error(500, str(e))
  • The @mcp.tool() decorator registers compare_scenarios as an MCP tool on the FastMCP instance.
    @mcp.tool()
  • Helper used by compare_scenarios to format error responses.
    def format_api_error(status_code: int, error_detail: str) -> Dict[str, Any]:
        return {
            "success": False,
            "status_code": status_code,
            "error": error_detail
        }
  • The FujitsuSocialDigitalTwinClient.get_metrics method called by compare_scenarios to fetch metrics for each simulation.
    async def get_metrics(self, simulation_id: str) -> Dict[str, Any]:
        try:
            response = await self.client.get(f"/api/metrics/{simulation_id}")
            response.raise_for_status()
            return format_simulation_result(response.json())
        except httpx.HTTPStatusError as e:
            logger.error(f"Metrics retrieval error: {e}")
            return format_api_error(e.response.status_code, str(e))
        except Exception as e:
            logger.error(f"Unexpected error retrieving metrics: {e}")
            return format_api_error(500, str(e))
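
Based on the handler quoted above, a successful call returns a plain dictionary rather than an error envelope. The shape below is derived from that code; the metric values are illustrative only:

    {
        "scenario1": {"name": "Scenario 1", "metrics": {"co2": 120.0, "travelTime": 34.2, "trafficCountTotal": 5400}},
        "scenario2": {"name": "Scenario 2", "metrics": {"co2": 98.5, "travelTime": 31.0, "trafficCountTotal": 5120}},
        "comparison": {
            "timestamp": "2025-01-01T12:00:00",
            "co2Difference": -21.5,            # scenario 2 minus scenario 1
            "travelTimeDifference": -3.2,
            "trafficCountDifference": -280
        }
    }
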
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations, the description bears the full burden of behavioral disclosure. It implies a read-only analysis operation, which is adequate, but it does not disclose side effects, performance implications, or authorization requirements beyond what can be inferred.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
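
One way to make the read-only behavior machine-readable is through MCP tool annotations. The sketch below is an illustration only: it assumes an SDK version that exposes ToolAnnotations in mcp.types and a FastMCP release whose tool() decorator accepts an annotations argument; it is not part of the current implementation:

    from mcp.types import ToolAnnotations

    @mcp.tool(annotations=ToolAnnotations(
        readOnlyHint=True,     # only reads metrics; mutates nothing on the server
        idempotentHint=True,   # the same IDs yield the same comparison
        openWorldHint=True,    # reaches out to the external Fujitsu API
    ))
    async def compare_scenarios(simulation_id1: str, simulation_id2: str,
                                scenario1_name: str = "Scenario 1",
                                scenario2_name: str = "Scenario 2",
                                ctx: Optional[Context] = None) -> Dict[str, Any]:
        ...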

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence of about 20 words, front-loading the key action and outcome. Every word contributes to the purpose, with no repetition or fluff.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 2/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given five parameters, no output schema, and no annotations, the description is too brief. It does not explain the output format, how to interpret the computed differences, constraints on input scenarios (e.g., same simulation or not), or error conditions, leaving significant gaps in agent understanding.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The description adds nothing about the five parameters beyond what the input schema already provides (titles, types, defaults). Since the schema's description coverage is 0%, the tool description fails to compensate by explaining parameter roles, formats, or relationships.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
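
Per-parameter descriptions could close this gap at the schema level. The sketch below assumes the installed FastMCP builds its input schema with pydantic, so Field descriptions attached via Annotated would propagate into the generated schema; treat it as an illustration, not the current code:

    from typing import Annotated, Any, Dict, Optional
    from pydantic import Field

    @mcp.tool()
    async def compare_scenarios(
        simulation_id1: Annotated[str, Field(description="ID of the baseline simulation run")],
        simulation_id2: Annotated[str, Field(description="ID of the simulation run compared against the baseline")],
        scenario1_name: Annotated[str, Field(description="Human-readable label for scenario 1")] = "Scenario 1",
        scenario2_name: Annotated[str, Field(description="Human-readable label for scenario 2")] = "Scenario 2",
        ctx: Optional[Context] = None,
    ) -> Dict[str, Any]:
        ...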

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: performing comparative analysis between two simulation scenarios, and lists specific metrics (traffic flow, emissions, travel times). The verb 'compare' and resource 'simulation scenarios' are specific and distinguish it from siblings like 'analyze_traffic_simulation' which focuses on a single simulation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

No guidance is provided on when to use this tool versus alternatives like 'analyze_traffic_simulation' or 'get_simulation_result'. There are no prerequisites, exclusion criteria, or context about when comparison is appropriate.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
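
One possible revision is to fold usage guidance into the docstring itself. The wording below is a suggestion only, and the prerequisite about available metrics is an assumption inferred from the error handling shown earlier:

    async def compare_scenarios(simulation_id1: str, simulation_id2: str,
                                scenario1_name: str = "Scenario 1",
                                scenario2_name: str = "Scenario 2",
                                ctx: Optional[Context] = None) -> Dict[str, Any]:
        """Compare two completed simulation scenarios and report differences in CO2,
        travel time, and total traffic count (computed as scenario 2 minus scenario 1).

        Use this tool when two finished simulations need to be contrasted for a
        planning decision; for a single simulation, prefer analyze_traffic_simulation
        or get_simulation_result. Both simulations must have metrics available,
        otherwise an error response is returned.
        """
        ...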

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/3a3/fujitsu-sdt-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.