by sniper35

check_spot_availability

Check availability of spot GPU instances across all locations using Verda Cloud's SDK. Specify GPU type and count to verify if resources are ready for deployment.

Instructions

Check if spot GPU instances are available.

Uses the official Verda SDK is_available() method to check across all locations.

Args:
- gpu_type: GPU type to check (default from config, e.g., "B300", "B200").
- gpu_count: Number of GPUs (default from config, e.g., 1, 2, 4, 8).

Returns: Availability status with location if available.
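Since this is an MCP tool, an agent invokes it through a `tools/call` request. A minimal sketch of what that request body might look like for this tool (the argument names come from the input schema below; both are optional, and the specific values shown are illustrative only):

```python
import json

# Hypothetical MCP tools/call request body for this tool.
# Both arguments are optional; omitting them falls back to the
# server's configured defaults.
request = {
    "method": "tools/call",
    "params": {
        "name": "check_spot_availability",
        "arguments": {"gpu_type": "B200", "gpu_count": 4},
    },
}

print(json.dumps(request, indent=2))
```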

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| gpu_type | No | | |
| gpu_count | No | | |

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| result | Yes | | |

Implementation Reference

  • The MCP tool handler decorated with @mcp.tool() that handles the check_spot_availability tool invocation. It accepts gpu_type and gpu_count parameters, calls the client implementation, and returns a formatted markdown result indicating availability status.
    @mcp.tool()
    async def check_spot_availability(
        gpu_type: str | None = None,
        gpu_count: int | None = None,
    ) -> str:
        """Check if spot GPU instances are available.
    
        Uses the official Verda SDK is_available() method to check across all locations.
    
        Args:
            gpu_type: GPU type to check (default from config, e.g., "B300", "B200").
            gpu_count: Number of GPUs (default from config, e.g., 1, 2, 4, 8).
    
        Returns:
            Availability status with location if available.
        """
        client = _get_client()
        config = get_config()
    
        gpu_type = gpu_type or config.defaults.gpu_type
        gpu_count = gpu_count or config.defaults.gpu_count
    
        result = await client.check_spot_availability(gpu_type, gpu_count)
    
        instance_type = get_instance_type_from_gpu_type_and_count(gpu_type, gpu_count)
    
        lines = [
            "# Spot Availability Check",
            "",
            f"**GPU Type**: {gpu_type}",
            f"**GPU Count**: {gpu_count}",
            f"**Instance Type**: {instance_type or 'Unknown'}",
            "",
        ]
    
        if result.available:
            lines.append("## ✓ AVAILABLE")
            lines.append("")
            lines.append(f"**Location**: {result.location}")
            lines.append("")
            lines.append(
                "Ready to deploy! Use `deploy_spot_instance` to create an instance."
            )
        else:
            lines.append("## ✗ NOT AVAILABLE")
            lines.append("")
            lines.append(
                "No spot instances available across all locations "
                "(FIN-01, FIN-02, FIN-03)."
            )
            lines.append("")
            lines.append("Options:")
            lines.append("- Use `monitor_spot_availability` to wait for availability")
            lines.append("- Try a different GPU type or count")
    
        return "\n".join(lines)
  • The core implementation logic for checking spot availability. Iterates through location codes, uses the Verda SDK's is_available() method to check each location, and returns an AvailabilityResult with the first available location or a not-available result.
    async def check_spot_availability(
        self,
        gpu_type: str | None = None,
        gpu_count: int | None = None,
        location: str | None = None,
    ) -> AvailabilityResult:
        """Check if a spot instance is available.
    
        Args:
            gpu_type: GPU type (default from config).
            gpu_count: Number of GPUs (default from config).
            location: Specific location to check (default: check all).
    
        Returns:
            AvailabilityResult with status and location if available.
        """
        self._ensure_client()
    
        gpu_type = gpu_type or self.config.defaults.gpu_type
        gpu_count = gpu_count or self.config.defaults.gpu_count
        instance_type = get_instance_type_from_gpu_type_and_count(gpu_type, gpu_count)
    
        if not instance_type:
            logger.warning(f"Unknown instance type for {gpu_type} x{gpu_count}")
            return AvailabilityResult(
                available=False,
                location="",
                instance_type="",
                gpu_type=gpu_type,
                gpu_count=gpu_count,
            )
    
        locations_to_check = [location] if location else LOCATION_CODES
    
        for loc in locations_to_check:
            try:
                available = await self._run_sync(
                    self._instances.is_available,
                    instance_type,
                    True,  # is_spot
                    loc,
                )
                if available:
                    logger.info(f"Spot available: {instance_type} at {loc}")
                    return AvailabilityResult(
                        available=True,
                        location=loc,
                        instance_type=instance_type,
                        gpu_type=gpu_type,
                        gpu_count=gpu_count,
                    )
            except Exception as e:
                logger.debug(f"Error checking {loc}: {e}")
                continue
    
        return AvailabilityResult(
            available=False,
            location="",
            instance_type=instance_type,
            gpu_type=gpu_type,
            gpu_count=gpu_count,
        )
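The first-available-location loop above can be illustrated in isolation. A minimal synchronous sketch, with the SDK's is_available() call replaced by a stubbed lookup (the location codes mirror those in the source; the stub and its values are assumptions for demonstration):

```python
LOCATION_CODES = ["FIN-01", "FIN-02", "FIN-03"]

# Stub standing in for the SDK's is_available() call, keyed by location.
_STUB_AVAILABILITY = {"FIN-01": False, "FIN-02": True, "FIN-03": True}

def first_available_location(locations=None):
    """Return the first location reporting availability, or None."""
    for loc in locations or LOCATION_CODES:
        try:
            if _STUB_AVAILABILITY[loc]:
                return loc
        except KeyError:
            # Mirror the source: treat per-location errors as "not available"
            continue
    return None

print(first_available_location())  # first hit wins: FIN-02
```

As in the source, the loop short-circuits on the first available location and swallows per-location errors, so a transient failure in one region never masks capacity in another.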
  • The AvailabilityResult dataclass that defines the schema for the availability check result, containing fields: available (bool), location (str), instance_type (str), gpu_type (str), and gpu_count (int).
    @dataclass
    class AvailabilityResult:
        """Result of an availability check."""
    
        available: bool
        location: str
        instance_type: str
        gpu_type: str
        gpu_count: int
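The dataclass above is plain Python, so constructing and branching on a result is straightforward. A quick sketch (the field values, including the instance-type string, are illustrative assumptions):

```python
from dataclasses import dataclass

@dataclass
class AvailabilityResult:
    """Result of an availability check."""
    available: bool
    location: str
    instance_type: str
    gpu_type: str
    gpu_count: int

# Illustrative values only; the instance-type name is hypothetical.
result = AvailabilityResult(
    available=True,
    location="FIN-02",
    instance_type="8xB200",
    gpu_type="B200",
    gpu_count=8,
)

status = f"AVAILABLE at {result.location}" if result.available else "NOT AVAILABLE"
print(status)
```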
  • The FastMCP server instance is created with mcp = FastMCP('verda-cloud'). The @mcp.tool() decorator on line 102 registers the check_spot_availability function as an MCP tool.
    # Initialize FastMCP server
    mcp = FastMCP("verda-cloud")
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

With no annotations provided, the description carries the full burden. It discloses that it 'checks across all locations' and uses a specific SDK method, which adds useful behavioral context. However, it doesn't mention rate limits, authentication requirements, whether this is a read-only operation (implied but not stated), or what happens when parameters are omitted (defaults from config). The description adds some value but leaves significant behavioral aspects unspecified.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is well-structured with clear sections: purpose statement, implementation detail, parameters, and return value. Each sentence earns its place by adding distinct information. At 5 sentences, it's appropriately sized for a tool with 2 parameters and important behavioral context. The front-loaded purpose statement immediately communicates the tool's function.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given the tool's moderate complexity (2 parameters, no annotations, but with output schema), the description provides good coverage. It explains the purpose, parameters, and return value at a high level. Since an output schema exists, the description doesn't need to detail the return structure. The main gap is lack of explicit behavioral constraints (rate limits, auth), but overall it's reasonably complete for this context.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

With 0% schema description coverage, the description must compensate for the schema's lack of documentation. It successfully explains both parameters: 'gpu_type' is described as 'GPU type to check' with examples, and 'gpu_count' as 'Number of GPUs' with examples. The description also clarifies that defaults come from config when parameters are omitted. This adds meaningful semantic context beyond the bare schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 4/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's purpose: 'Check if spot GPU instances are available' with specific resource (spot GPU instances) and verb (check). It distinguishes from siblings like 'monitor_spot_availability' by focusing on a single availability check rather than ongoing monitoring. However, it doesn't explicitly contrast with 'deploy_spot_instance' which would be the logical next step after availability confirmation.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies usage context through the mention of 'spot GPU instances' and the SDK method, suggesting this tool is for checking availability before deployment. However, it doesn't provide explicit guidance on when to use it versus alternatives like 'monitor_spot_availability' (for continuous monitoring) or 'deploy_spot_instance' (for actual deployment). The context is clear but lacks explicit when/when-not statements.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.
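As a concrete illustration of the critiques above, a revised description that adds the missing read-only disclosure and when/when-not guidance might read as follows. This wording is our hypothetical suggestion, not the server's actual text:

```python
# Hypothetical revised tool description addressing the review's gaps;
# not the server's actual text.
REVISED_DESCRIPTION = (
    "Check once whether spot GPU instances are available across all "
    "locations (read-only; no resources are created). Omitted parameters "
    "fall back to configured defaults. Use monitor_spot_availability to "
    "wait for capacity instead of polling, and deploy_spot_instance to "
    "create an instance once availability is confirmed."
)

print(REVISED_DESCRIPTION)
```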
