# check_spot_availability
Check availability of spot GPU instances across all locations using Verda Cloud's SDK. Specify GPU type and count to verify if resources are ready for deployment.
## Instructions
Check if spot GPU instances are available.

Uses the official Verda SDK `is_available()` method to check across all locations.

**Args:**
- `gpu_type`: GPU type to check (default from config, e.g., `"B300"`, `"B200"`).
- `gpu_count`: Number of GPUs (default from config, e.g., 1, 2, 4, 8).

**Returns:** Availability status, with the location if available.
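For example, when capacity is found, the handler shown under Implementation Reference renders markdown along these lines (the instance-type name here is illustrative, not the actual mapping produced by `get_instance_type_from_gpu_type_and_count`):

```markdown
# Spot Availability Check

**GPU Type**: B300
**GPU Count**: 8
**Instance Type**: 8xB300

## ✓ AVAILABLE

**Location**: FIN-01

Ready to deploy! Use `deploy_spot_instance` to create an instance.
```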
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| `gpu_type` | No | GPU type to check (e.g., `B300`, `B200`) | From config |
| `gpu_count` | No | Number of GPUs (e.g., 1, 2, 4, 8) | From config |
## Implementation Reference
- **src/verda_mcp/server.py:102-157 (handler)** — The MCP tool handler, decorated with `@mcp.tool()`, that handles the `check_spot_availability` tool invocation. It accepts `gpu_type` and `gpu_count` parameters, calls the client implementation, and returns a formatted markdown result indicating availability status.
```python
@mcp.tool()
async def check_spot_availability(
    gpu_type: str | None = None,
    gpu_count: int | None = None,
) -> str:
    """Check if spot GPU instances are available.

    Uses the official Verda SDK is_available() method to check across all locations.

    Args:
        gpu_type: GPU type to check (default from config, e.g., "B300", "B200").
        gpu_count: Number of GPUs (default from config, e.g., 1, 2, 4, 8).

    Returns:
        Availability status with location if available.
    """
    client = _get_client()
    config = get_config()
    gpu_type = gpu_type or config.defaults.gpu_type
    gpu_count = gpu_count or config.defaults.gpu_count

    result = await client.check_spot_availability(gpu_type, gpu_count)
    instance_type = get_instance_type_from_gpu_type_and_count(gpu_type, gpu_count)

    lines = [
        "# Spot Availability Check",
        "",
        f"**GPU Type**: {gpu_type}",
        f"**GPU Count**: {gpu_count}",
        f"**Instance Type**: {instance_type or 'Unknown'}",
        "",
    ]

    if result.available:
        lines.append("## ✓ AVAILABLE")
        lines.append("")
        lines.append(f"**Location**: {result.location}")
        lines.append("")
        lines.append(
            "Ready to deploy! Use `deploy_spot_instance` to create an instance."
        )
    else:
        lines.append("## ✗ NOT AVAILABLE")
        lines.append("")
        lines.append(
            "No spot instances available across all locations "
            "(FIN-01, FIN-02, FIN-03)."
        )
        lines.append("")
        lines.append("Options:")
        lines.append("- Use `monitor_spot_availability` to wait for availability")
        lines.append("- Try a different GPU type or count")

    return "\n".join(lines)
```

- **src/verda_mcp/client.py:230-291 (helper)** — The core implementation logic for checking spot availability. It iterates through the location codes, uses the Verda SDK's `is_available()` method to check each location, and returns an `AvailabilityResult` for the first available location, or a not-available result.
```python
async def check_spot_availability(
    self,
    gpu_type: str | None = None,
    gpu_count: int | None = None,
    location: str | None = None,
) -> AvailabilityResult:
    """Check if a spot instance is available.

    Args:
        gpu_type: GPU type (default from config).
        gpu_count: Number of GPUs (default from config).
        location: Specific location to check (default: check all).

    Returns:
        AvailabilityResult with status and location if available.
    """
    self._ensure_client()
    gpu_type = gpu_type or self.config.defaults.gpu_type
    gpu_count = gpu_count or self.config.defaults.gpu_count

    instance_type = get_instance_type_from_gpu_type_and_count(gpu_type, gpu_count)
    if not instance_type:
        logger.warning(f"Unknown instance type for {gpu_type} x{gpu_count}")
        return AvailabilityResult(
            available=False,
            location="",
            instance_type="",
            gpu_type=gpu_type,
            gpu_count=gpu_count,
        )

    locations_to_check = [location] if location else LOCATION_CODES

    for loc in locations_to_check:
        try:
            available = await self._run_sync(
                self._instances.is_available,
                instance_type,
                True,  # is_spot
                loc,
            )
            if available:
                logger.info(f"Spot available: {instance_type} at {loc}")
                return AvailabilityResult(
                    available=True,
                    location=loc,
                    instance_type=instance_type,
                    gpu_type=gpu_type,
                    gpu_count=gpu_count,
                )
        except Exception as e:
            logger.debug(f"Error checking {loc}: {e}")
            continue

    return AvailabilityResult(
        available=False,
        location="",
        instance_type=instance_type,
        gpu_type=gpu_type,
        gpu_count=gpu_count,
    )
```

- **src/verda_mcp/client.py:66-74 (schema)** — The `AvailabilityResult` dataclass that defines the schema for the availability-check result, with fields `available` (bool), `location` (str), `instance_type` (str), `gpu_type` (str), and `gpu_count` (int).
```python
@dataclass
class AvailabilityResult:
    """Result of an availability check."""

    available: bool
    location: str
    instance_type: str
    gpu_type: str
    gpu_count: int
```

- **src/verda_mcp/server.py:23-24 (registration)** — The FastMCP server instance is created with `mcp = FastMCP("verda-cloud")`. The `@mcp.tool()` decorator on line 102 registers the `check_spot_availability` function as an MCP tool.
```python
# Initialize FastMCP server
mcp = FastMCP("verda-cloud")
```
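As a standalone sketch, the `AvailabilityResult` schema can be exercised directly. The dataclass is reproduced here so the snippet is self-contained; the instance-type string is illustrative, not the actual mapping produced by `get_instance_type_from_gpu_type_and_count`:

```python
from dataclasses import dataclass

@dataclass
class AvailabilityResult:
    """Result of an availability check."""
    available: bool
    location: str
    instance_type: str
    gpu_type: str
    gpu_count: int

# A hit: the first location with capacity wins.
hit = AvailabilityResult(True, "FIN-01", "8xB300", "B300", 8)
# A miss: location is left empty, matching the helper's fallback return.
miss = AvailabilityResult(False, "", "8xB300", "B300", 8)

print(hit.available, hit.location)  # → True FIN-01
```

The handler only inspects `result.available` and `result.location`; the remaining fields echo the request so callers can log exactly what was checked.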