
Server Configuration

Describes the environment variables used to configure the server (both are optional).

Name            Required  Description            Default
RESQ_API_KEY    No        Bearer token for auth  resq-dev-token
RESQ_SAFE_MODE  No        Disable side effects   True
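As a minimal sketch, this is how the server might read the two variables above; the function name `load_config` and the boolean parsing are assumptions, while the variable names and defaults come from the table.

```python
import os

def load_config() -> dict:
    """Read server configuration from the environment (illustrative)."""
    return {
        # Defaults match the table above.
        "api_key": os.environ.get("RESQ_API_KEY", "resq-dev-token"),
        # Treat anything other than "false"/"0" as safe mode enabled.
        "safe_mode": os.environ.get("RESQ_SAFE_MODE", "True").lower()
                     not in ("false", "0"),
    }

config = load_config()
```

With neither variable set, this yields the documented defaults: the dev token and safe mode on.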

Capabilities

Features and capabilities supported by this server

Capability    Details
tools         { "listChanged": true }
prompts       { "listChanged": false }
resources     { "subscribe": false, "listChanged": false }
experimental  {}

Tools

Functions exposed to the LLM to take actions

run_simulation

Trigger a Digital Twin physics simulation for disaster scenario modeling.

Queues a high-fidelity simulation job and returns immediately with a job ID. Clients should subscribe to the simulation resource URI for real-time progress updates and result notification.

Workflow:
1. Validate simulation request parameters
2. Generate unique simulation ID
3. Queue job to DTSOP backend (Unity/Unreal Engine)
4. Store job metadata in simulation registry
5. Return simulation ID and subscription URI
6. Background processor updates status: queued → processing → completed
7. Client fetches results from NeoFS when completed
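The queue-and-return-immediately pattern in the workflow above can be sketched with asyncio; `run_simulation_stub`, `process`, and the in-memory `registry` are illustrative stand-ins, not the server's actual implementation.

```python
import asyncio
import uuid

# In-memory stand-in for the simulation registry (step 4 above).
registry: dict[str, str] = {}

async def run_simulation_stub() -> str:
    """Register a job and return its ID at once (steps 2-5)."""
    sim_id = f"SIM-{uuid.uuid4().hex[:8].upper()}"
    registry[sim_id] = "queued"
    asyncio.create_task(process(sim_id))  # fire-and-forget background job
    return sim_id

async def process(sim_id: str) -> None:
    """Background processor advancing the status (step 6)."""
    for status in ("processing", "completed"):
        await asyncio.sleep(0)  # stand-in for real simulation work
        registry[sim_id] = status

async def main() -> tuple[str, str]:
    sim_id = await run_simulation_stub()
    status_at_return = registry[sim_id]  # still "queued" right after return
    await asyncio.sleep(0.01)            # let the background task finish
    return status_at_return, registry[sim_id]

queued, final = asyncio.run(main())
```

The caller gets its ID while the job is still queued; the status advances to completed only afterwards, which is why clients subscribe to the resource URI rather than waiting on the call.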

Args:
- request: SimulationRequest with:
  - scenario_id: Unique scenario identifier
  - sector_id: Geographic sector to simulate
  - disaster_type: Physics model (flood/wildfire/earthquake)
  - parameters: Scenario params (wind_speed, water_level, etc.)
  - priority: "standard" or "urgent"
- ctx: Optional FastMCP context for logging.

Returns: str: Message with simulation ID and subscription instructions: "Simulation queued with ID: SIM-XXXXXXXX. Subscribe to resq://simulations/SIM-XXXXXXXX for updates."

Example:
    >>> from resq_mcp.models import SimulationRequest
    >>> request = SimulationRequest(
    ...     scenario_id="flood-001",
    ...     sector_id="Sector-1",
    ...     disaster_type="flood",
    ...     parameters={"water_level": 2.5},
    ...     priority="urgent"
    ... )
    >>> result = await run_simulation(request)
    >>> print(result)  # "Simulation queued with ID: SIM-ABCD1234..."

Integration: Production would:
- Validate request against simulation templates
- Check cluster capacity and queue position
- Store job in Redis with priority
- Submit to Unity/Unreal Engine processing cluster
- Return estimated completion time
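Since the tool returns a plain-text message in the format shown under Returns, a client can recover the job ID and subscription URI with a small parser; `parse_queue_message` is a hypothetical client-side helper, not part of the server API.

```python
import re

def parse_queue_message(message: str) -> tuple[str, str]:
    """Extract the simulation ID and resource URI from the tool's reply."""
    match = re.search(r"ID:\s*(SIM-[A-Z0-9]+)", message)
    if match is None:
        raise ValueError(f"unexpected message format: {message!r}")
    sim_id = match.group(1)
    # URI pattern taken from the Returns section above.
    return sim_id, f"resq://simulations/{sim_id}"

sim_id, uri = parse_queue_message(
    "Simulation queued with ID: SIM-ABCD1234. "
    "Subscribe to resq://simulations/SIM-ABCD1234 for updates."
)
```

The client would then subscribe to `uri` for progress updates instead of polling the tool.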

get_deployment_strategy

Generate an RL-optimized drone deployment and evacuation strategy.

Uses reinforcement learning models trained on thousands of simulated disasters to recommend optimal resource allocation, routing, and risk parameters for a specific incident or pre-alert.

Args: incident_id: Incident identifier (INC-XXX) or pre-alert ID (PRE-XXX) to generate strategy for.

Returns: OptimizationStrategy: Complete strategy recommendation with:
- strategy_id: Unique identifier
- related_alert_id: Original incident/alert ID
- recommended_deployment: Drone type counts
- evacuation_routes: Prioritized route list
- estimated_success_rate: Predicted success (0.0-1.0)
- simulation_proof_url: NeoFS evidence link

Example:
    >>> strategy = await get_deployment_strategy("PRE-ABC123")
    >>> print(strategy.strategy_id)
    >>> print(strategy.recommended_deployment)  # {"surveillance": 2, ...}
    >>> print(f"Success rate: {strategy.estimated_success_rate:.0%}")

Use Cases:
- Pre-positioning drones before predicted disasters (PDIE alerts)
- Active response optimization for confirmed incidents
- Multi-objective optimization (speed, safety, resource efficiency)
- Scenario comparison and sensitivity analysis

Integration: The strategy is linked to the blockchain for an immutable audit trail. After approval, use update_mission_params to push it to the drones.
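A minimal sketch of the OptimizationStrategy shape implied by the Returns section; the field names follow that list, but the concrete Python types and the example values are assumptions.

```python
from dataclasses import dataclass

@dataclass
class OptimizationStrategy:
    strategy_id: str                       # unique identifier
    related_alert_id: str                  # original incident/alert ID
    recommended_deployment: dict[str, int] # drone type -> count
    evacuation_routes: list[str]           # highest priority first
    estimated_success_rate: float          # 0.0 - 1.0
    simulation_proof_url: str              # NeoFS evidence link

# Illustrative values only.
strategy = OptimizationStrategy(
    strategy_id="STRAT-001",
    related_alert_id="PRE-ABC123",
    recommended_deployment={"surveillance": 2, "payload": 1},
    evacuation_routes=["Route-A", "Route-B"],
    estimated_success_rate=0.87,
    simulation_proof_url="neofs://container/object",
)
summary = f"Success rate: {strategy.estimated_success_rate:.0%}"
```

Typing `recommended_deployment` as a drone-type-to-count mapping matches the doctest output `{"surveillance": 2, ...}` shown above.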

validate_incident

Submit validation result for an incident report.

Used by human operators or automated validation systems (HCE) to confirm or reject incident reports before triggering full response.

Args: val: IncidentValidation with:
- incident_id: ID of incident being validated
- is_confirmed: True = confirmed, False = rejected/false positive
- validation_source: Who/what validated (e.g., "Human-Operator")
- correlated_pre_alert_id: Optional linked PDIE alert
- notes: Validation reasoning and evidence

Returns: str: Confirmation message indicating action taken: "Incident {id} successfully CONFIRMED." or "Incident {id} successfully REJECTED."

Example:
    >>> from resq_mcp.models import IncidentValidation
    >>> validation = IncidentValidation(
    ...     incident_id="INC-123",
    ...     is_confirmed=True,
    ...     validation_source="Human-Operator-Alice",
    ...     notes="Confirmed via video evidence and ground reports"
    ... )
    >>> result = await validate_incident(validation)
    >>> print(result)  # "Incident INC-123 successfully CONFIRMED."

Workflow:
1. Edge AI detects incident (low confidence)
2. HCE cross-references with PDIE/sensors
3. If ambiguous → human review required
4. Operator submits validation via this tool
5. If confirmed → trigger response strategy
6. If rejected → log as false positive, update ML model

Audit Trail: All validations logged with timestamp, source, and reasoning for post-incident analysis and ML model refinement.
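The confirm/reject branch and the audit-trail requirement above can be sketched as follows; the in-memory `incidents` registry and `audit_log` list are illustrative stand-ins, and only the return-message format is taken from the Returns section.

```python
from datetime import datetime, timezone

audit_log: list[dict] = []
incidents = {"INC-123": {"status": "PENDING"}}

def validate_incident_stub(incident_id: str, is_confirmed: bool,
                           source: str, notes: str = "") -> str:
    """Apply a validation decision and record it for post-incident analysis."""
    action = "CONFIRMED" if is_confirmed else "REJECTED"
    incidents[incident_id]["status"] = action
    # Every validation is logged with timestamp, source, and reasoning.
    audit_log.append({
        "incident_id": incident_id,
        "is_confirmed": is_confirmed,
        "source": source,
        "notes": notes,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    })
    return f"Incident {incident_id} successfully {action}."

msg = validate_incident_stub("INC-123", True, "Human-Operator-Alice",
                             "Confirmed via video evidence")
```

A rejected validation would take the same path with `is_confirmed=False`, leaving a false-positive record for ML model refinement.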

Prompts

Interactive templates invoked by user choice

incident_response_plan

Generate a structured prompt template for incident response planning. Provides a framework for AI agents or human operators to systematically analyze incidents and develop comprehensive response plans using available MCP tools and resources.

Template Sections:
1. Situation Summary: Analyze current state and severity
2. Asset Allocation: Review and assign available resources
3. Risk Assessment: Evaluate hazards and constraints

Args: incident_id: The incident identifier to analyze (e.g., "INC-123").

Returns: str: Formatted prompt template with:
- Analysis instructions
- Tool references (get_deployment_strategy, resq://drones/active)
- Expected output format

Example:
    >>> prompt = incident_response_plan("INC-456")
    >>> # Use with LLM:
    >>> response = llm.complete(prompt)
    >>> # LLM will call tools and produce structured response

Use Cases:
- AI-assisted crisis coordination (Spoon OS agent)
- Human operator decision support
- Training scenario generation
- Post-incident plan review

Integration: The prompt references MCP tools and resources that the LLM can call:
- get_deployment_strategy(incident_id) → OptimizationStrategy
- resq://drones/active → Fleet status
- Additional sector/swarm status tools as needed
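A hedged sketch of what the generated template might look like: the three section names and the tool/resource references come from the description above, but the exact wording of the server's template is not documented, so this builder is illustrative.

```python
def incident_response_plan_stub(incident_id: str) -> str:
    """Build a structured planning prompt for the given incident (illustrative)."""
    return (
        f"You are coordinating the response to incident {incident_id}.\n"
        "1. Situation Summary: analyze the current state and severity.\n"
        "2. Asset Allocation: review resq://drones/active and assign "
        "available resources.\n"
        "3. Risk Assessment: evaluate hazards and constraints.\n"
        f"Call get_deployment_strategy('{incident_id}') for an RL-optimized "
        "strategy, then produce a structured response plan."
    )

prompt = incident_response_plan_stub("INC-456")
```

The returned string would then be passed to an LLM, which can invoke the referenced MCP tools and resources while filling in each section.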

Resources

Contextual data attached and managed by the client

list_active_drones

List currently deployed drones in the active fleet. Resource endpoint providing real-time fleet status for operator awareness. Shows current deployment locations, battery levels, and operational modes.

URI Pattern: resq://drones/active

Returns: str: Formatted string with active drone details:
- Drone identifier
- Drone type/capability (Surveillance/Payload/Relay)
- Operational status (ACTIVE/RETURNING/CHARGING)
- Battery percentage
- Current sector assignment

Example Response:
[Active Fleet Status]
- DRONE-Alpha (Surveillance): ACTIVE | Battery 78% | Sector 4
- DRONE-Beta (Payload): RETURNING | Battery 12% | Sector 2
- DRONE-Gamma (Relay): ACTIVE | Battery 92% | Sector 4

Use Cases:
- Operator dashboard fleet overview
- Resource availability checking before deployment
- Low battery alert monitoring
- Sector coverage assessment

Note: The current implementation returns static mock data. Production would query live telemetry from the MCP drone feed server and aggregate real-time positions, battery, and mission status.
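Because the resource returns a formatted string rather than structured data, a client that needs records (for a dashboard or low-battery alerts) has to parse it; this parser is a client-side assumption built against the example response above.

```python
import re

SAMPLE = """[Active Fleet Status]
- DRONE-Alpha (Surveillance): ACTIVE | Battery 78% | Sector 4
- DRONE-Beta (Payload): RETURNING | Battery 12% | Sector 2
- DRONE-Gamma (Relay): ACTIVE | Battery 92% | Sector 4"""

# One pattern per drone line: id, type, status, battery %, sector.
LINE = re.compile(
    r"- (?P<id>\S+) \((?P<type>[^)]+)\): (?P<status>\w+) \| "
    r"Battery (?P<battery>\d+)% \| (?P<sector>Sector \d+)"
)

def parse_fleet(text: str) -> list[dict]:
    """Turn the formatted fleet string into one dict per drone."""
    return [m.groupdict() for m in LINE.finditer(text)]

fleet = parse_fleet(SAMPLE)
# Low battery alert monitoring (one of the use cases above).
low_battery = [d["id"] for d in fleet if int(d["battery"]) < 20]
```

Against the sample, this flags DRONE-Beta at 12% as the only low-battery unit.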
