
run_simulation

Queue high-fidelity physics simulations for disaster scenarios using digital twin technology. Monitor progress via subscription and retrieve results from NeoFS upon completion.

Instructions

Trigger a Digital Twin physics simulation for disaster scenario modeling.

Queues a high-fidelity simulation job and returns immediately with a job ID. Clients should subscribe to the simulation resource URI for real-time progress updates and result notification.

Workflow:
1. Validate simulation request parameters
2. Generate a unique simulation ID
3. Queue the job to the DTSOP backend (Unity/Unreal Engine)
4. Store job metadata in the simulation registry
5. Return the simulation ID and subscription URI
6. Background processor updates status: queued → processing → completed
7. Client fetches results from NeoFS when completed
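The queuing flow above can be sketched roughly as follows. This is a minimal illustration, not the actual implementation: `simulation_registry` is a hypothetical in-memory stand-in for the real job store, and the DTSOP submission itself is reduced to a comment.

```python
import uuid

# Hypothetical in-memory registry standing in for the real job store.
simulation_registry: dict[str, dict] = {}

VALID_DISASTERS = {"flood", "wildfire", "earthquake"}

def queue_simulation(scenario_id: str, sector_id: str,
                     disaster_type: str, parameters: dict,
                     priority: str = "standard") -> str:
    """Validate, register, and queue a simulation job; return a status message."""
    if disaster_type not in VALID_DISASTERS:
        raise ValueError(f"unsupported disaster_type: {disaster_type!r}")
    if priority not in ("standard", "urgent"):
        raise ValueError("priority must be 'standard' or 'urgent'")

    sim_id = f"SIM-{uuid.uuid4().hex[:8].upper()}"
    simulation_registry[sim_id] = {
        "scenario_id": scenario_id,
        "sector_id": sector_id,
        "disaster_type": disaster_type,
        "parameters": parameters,
        "priority": priority,
        "status": "queued",
    }
    # A real implementation would submit the job to the DTSOP backend here.
    return (f"Simulation queued with ID: {sim_id}. "
            f"Subscribe to resq://simulations/{sim_id} for updates.")
```

The sketch returns immediately after registration, matching the tool's documented behavior; the background status transitions would be driven by a separate processor.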

Args:
    request: SimulationRequest with:
        - scenario_id: Unique scenario identifier
        - sector_id: Geographic sector to simulate
        - disaster_type: Physics model (flood/wildfire/earthquake)
        - parameters: Scenario params (wind_speed, water_level, etc.)
        - priority: "standard" or "urgent"
    ctx: Optional FastMCP context for logging.

Returns: str: Message with simulation ID and subscription instructions: "Simulation queued with ID: SIM-XXXXXXXX. Subscribe to resq://simulations/SIM-XXXXXXXX for updates."
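A client that cannot hold a live subscription can fall back to polling the returned URI until the job reaches a terminal state. A minimal sketch, where `read_resource` is a hypothetical stand-in for whatever MCP client call fetches the resource and is assumed to return a dict with a "status" key:

```python
import time

def wait_for_completion(read_resource, uri: str,
                        poll_interval: float = 1.0,
                        timeout: float = 600.0) -> str:
    """Poll a resq://simulations/<id> resource until it reaches a
    terminal status, then return that status."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = read_resource(uri)["status"]
        if status in ("completed", "failed"):
            return status
        time.sleep(poll_interval)
    raise TimeoutError(f"simulation at {uri} did not finish within {timeout}s")
```

Subscription remains the preferred mechanism; polling is only a fallback and should use a generous interval to avoid hammering the server.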

Example:
    >>> from resq_mcp.models import SimulationRequest
    >>> request = SimulationRequest(
    ...     scenario_id="flood-001",
    ...     sector_id="Sector-1",
    ...     disaster_type="flood",
    ...     parameters={"water_level": 2.5},
    ...     priority="urgent",
    ... )
    >>> result = await run_simulation(request)
    >>> print(result)  # "Simulation queued with ID: SIM-ABCD1234..."

Integration: A production deployment would:
- Validate the request against simulation templates
- Check cluster capacity and queue position
- Store the job in Redis with priority
- Submit to the Unity/Unreal Engine processing cluster
- Return an estimated completion time
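The "store the job in Redis with priority" step can be approximated in-process with a heap. A sketch using only the two priority levels the tool documents (stdlib only, no Redis; all names here are illustrative):

```python
import heapq
import itertools

# Lower rank pops first: "urgent" jobs jump ahead of "standard" ones.
_PRIORITY_RANK = {"urgent": 0, "standard": 1}
_counter = itertools.count()   # tie-breaker keeps FIFO order within a priority
_queue: list[tuple[int, int, str]] = []

def enqueue(sim_id: str, priority: str = "standard") -> None:
    heapq.heappush(_queue, (_PRIORITY_RANK[priority], next(_counter), sim_id))

def next_job() -> str:
    """Pop the highest-priority (then oldest) simulation ID."""
    return heapq.heappop(_queue)[2]
```

In a real deployment a Redis sorted set would play the same role, so the queue survives process restarts and is shared across workers.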

Input Schema

Name: request
Required: Yes
Description: Request for high-fidelity physics simulation in digital twin. Part of the DTSOP system. Triggers a physics-based simulation in Unity/Unreal Engine for accurate disaster propagation modeling and strategy validation.
Attributes:
- scenario_id: Unique scenario identifier for this simulation.
- sector_id: Geographic sector to simulate.
- disaster_type: Type of disaster to model (e.g., "flood", "wildfire").
- parameters: Simulation parameters (e.g., {"wind_speed": 15.5, "water_level": 2.3}).
- priority: Processing priority (standard is queued, urgent is fast-tracked).
Note: Simulations run asynchronously. Monitor progress via the returned simulation ID and resource subscription (resq://simulations/{id}).
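A stdlib sketch of the request shape described above. The real `resq_mcp.models.SimulationRequest` is presumably a Pydantic model; the types, defaults, and validation shown here are assumptions drawn only from the schema description:

```python
from dataclasses import dataclass, field

@dataclass
class SimulationRequest:
    """Assumed shape of the simulation request; see the schema description."""
    scenario_id: str            # unique scenario identifier
    sector_id: str              # geographic sector to simulate
    disaster_type: str          # e.g. "flood", "wildfire", "earthquake"
    parameters: dict = field(default_factory=dict)  # e.g. {"water_level": 2.3}
    priority: str = "standard"  # "standard" (queued) or "urgent" (fast-tracked)

    def __post_init__(self) -> None:
        if self.priority not in ("standard", "urgent"):
            raise ValueError("priority must be 'standard' or 'urgent'")
```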

Output Schema

Name: result
Required: Yes
Behavior: 5/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden of behavioral disclosure. It thoroughly explains the tool's behavior: it queues jobs asynchronously, returns a job ID immediately, requires clients to subscribe for updates, and outlines a detailed workflow from validation to result fetching. This covers critical aspects like async processing, job tracking, and result retrieval.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 3/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with sections like 'Workflow,' 'Args,' 'Returns,' 'Example,' and 'Integration,' but it is overly detailed and lengthy. Some sections, such as the extensive 'Integration' details, may be unnecessary for basic tool understanding, reducing conciseness despite good organization.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 5/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the complexity of an async simulation tool with no annotations, the description is highly complete. It explains the purpose, usage, behavior, parameters, return values, and provides an example. With an output schema present, it doesn't need to detail return values extensively, and it adequately covers all necessary contextual aspects for effective tool invocation.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, so the schema already documents the single parameter 'request' and its nested properties. The description adds some context by listing the attributes of SimulationRequest and providing an example, but it does not add much semantic understanding beyond what the schema provides, which matches the baseline expectation when schema coverage is already this high.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Trigger a Digital Twin physics simulation for disaster scenario modeling.' It specifies the verb 'trigger' and the resource 'simulation job,' distinguishing it from sibling tools like 'get_deployment_strategy' and 'validate_incident' by focusing on execution rather than retrieval or validation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 4/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description provides clear context on when to use this tool: for queuing high-fidelity simulation jobs that run asynchronously. It mentions monitoring progress via subscription, but does not explicitly state when not to use it or compare it to alternatives like the sibling tools, which could help differentiate further.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
