
Fujitsu Social Digital Twin MCP Server

by 3a3

list_simdata

Retrieve the complete list of all simulation datasets available in the system to use as inputs for running new simulations.

Instructions

Returns a complete list of all simulation datasets available in the system, which can be used as inputs for running new simulations.

Input Schema

Name    Required    Description    Default
ctx     No          -              -

Implementation Reference

  • The main tool handler function for 'list_simdata' — decorated with @mcp.tool(), it calls api_client.get_simdata_list() and returns the result.
    @mcp.tool()
    async def list_simdata(ctx: Optional[Context] = None) -> Dict[str, Any]:
        """Returns a complete list of all simulation datasets available in the system, 
        which can be used as inputs for running new simulations."""
        async with await get_http_client() as client:
            api_client = FujitsuSocialDigitalTwinClient(client)
            result = await api_client.get_simdata_list()
        return result
  • The FujitsuSocialDigitalTwinClient.get_simdata_list() helper method that makes the actual HTTP GET /api/simdata call and wraps the response.
    async def get_simdata_list(self) -> Dict[str, Any]:
        try:
            response = await self.client.get("/api/simdata")
            response.raise_for_status()
            return format_simulation_result(response.json())
        except httpx.HTTPStatusError as e:
            logger.error(f"Simulation data list retrieval error: {e}")
            return format_api_error(e.response.status_code, str(e))
        except Exception as e:
            logger.error(f"Unexpected error retrieving simulation data list: {e}")
            return format_api_error(500, str(e))
  • The @mcp.tool() decorator registers 'list_simdata' as an MCP tool (same line as the handler definition).
    @mcp.tool()
    async def list_simdata(ctx: Optional[Context] = None) -> Dict[str, Any]:
  • The format_simulation_result() helper wraps API responses into a standard success/data format.
    def format_simulation_result(result: Dict[str, Any]) -> Dict[str, Any]:
        return {
            "success": True,
            "data": result
        }
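The format_api_error() helper is called in both error paths above but is not shown on this page. A minimal sketch of what it could look like, mirroring the success wrapper (the exact field names are assumptions, not taken from the source):

```python
from typing import Any, Dict


def format_api_error(status_code: int, message: str) -> Dict[str, Any]:
    # Counterpart to format_simulation_result(): same top-level "success"
    # flag, but carries the HTTP status and error message instead of data.
    return {
        "success": False,
        "error": {
            "status_code": status_code,
            "message": message,
        },
    }
```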
Behavior: 2/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries full burden for behavioral disclosure. It states the tool returns data but does not specify if it is read-only, requires special permissions, or has any side effects. Limited transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.
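The MCP specification defines per-tool annotations (readOnlyHint, destructiveHint, idempotentHint, openWorldHint) that would let a pure read operation like this declare its behavior up front. A sketch of the annotations object the tool could publish; whether and how the server framework in use exposes a way to set them is an assumption:

```python
# MCP ToolAnnotations values appropriate for list_simdata,
# a read-only listing call against an external HTTP API.
list_simdata_annotations = {
    "readOnlyHint": True,      # does not modify its environment
    "destructiveHint": False,  # only meaningful when readOnlyHint is False
    "idempotentHint": True,    # repeated calls return the same listing
    "openWorldHint": True,     # talks to an external API over HTTP
}
```

Publishing these hints would shift the behavioral-disclosure burden off the prose description.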

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is a single sentence that efficiently conveys the core functionality. It is concise without being overly terse, though it could be slightly more structured.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 3/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

For a simple list tool, the description adequately states the return value. However, given the number of sibling tools, more context on what distinguishes this from similar list operations would improve completeness.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 1/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

The only parameter ('ctx') is optional and has no description in the schema (coverage 0%). The tool description does not mention or explain this parameter, leaving the agent without guidance on how to use it.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.
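One low-cost fix is documenting 'ctx' directly in the handler's docstring. A hypothetical revision (the wording is illustrative, not from the source; Context is stubbed as Any to keep the sketch self-contained):

```python
from typing import Any, Dict, Optional


async def list_simdata(ctx: Optional[Any] = None) -> Dict[str, Any]:
    """Return the complete list of simulation datasets available in the system.

    Args:
        ctx: Optional MCP request context, injected automatically by the
            framework for logging and progress reporting. Agents should
            omit it; there is no need to supply a value.
    """
    raise NotImplementedError  # body elided; see the handler above
```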

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the action ('Returns a complete list') and the resource ('simulation datasets'), and the phrase 'available in the system' distinguishes it from sibling tools like list_simulations which list simulations, not datasets. It is specific and unambiguous.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 2/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides no guidance on when to use this tool versus its siblings (e.g., list_simulations, get_simdata). It does not mention any prerequisites or conditions for usage.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
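A hypothetical revision of the description that adds the missing "when to use" guidance relative to the sibling tools named above (the exact routing advice is an assumption about how the sibling tools behave):

```python
# Revised tool description adding explicit usage guidance.
REVISED_DESCRIPTION = (
    "Returns a complete list of all simulation datasets available in the "
    "system, which can be used as inputs for running new simulations. "
    "Use this tool to discover dataset IDs before starting a simulation; "
    "use get_simdata to inspect a single dataset, and list_simulations to "
    "list simulation runs rather than input datasets."
)
```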

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/3a3/fujitsu-sdt-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.