
MCP4Modal Sandbox

by milkymap

launch_sandbox

Launch isolated Python environments with configurable dependencies, GPU support, and resource allocation for testing, ML workloads, and secure code execution.

Instructions

Launches a new Modal sandbox with the specified configuration.

Parameters:

- python_version: Python version to use (default: "3.12")
- pip_packages: List of pip packages to install
- apt_packages: List of apt packages to install
- timeout_seconds: Maximum runtime in seconds (default: 600)
- cpu: CPU cores allocated (default: 2.0)
- memory: Memory allocated in MB (default: 4096)
- secrets: Dictionary of environment variables to inject (creates a new secret)
- volumes: Dictionary of volumes to mount, where each key is the mount path inside the sandbox and each value is the name of the Modal volume
- workdir: Working directory in the sandbox (default: "/home/solver")
- gpu_type: Type of GPU to use (optional). Supported types:
  - T4: Entry-level GPU, good for inference
  - L4: Mid-range GPU, good for general ML tasks
  - A10G: High-performance GPU, good for training
  - A100-40GB: High-end GPU with 40 GB memory
  - A100-80GB: High-end GPU with 80 GB memory
  - L40S: Latest-generation GPU, good for ML workloads
  - H100: Latest-generation high-end GPU
  - H200: Latest-generation flagship GPU
  - B200: Latest-generation enterprise GPU
- gpu_count: Number of GPUs to use (optional, default: 1). A10G supports up to 4 GPUs; other types support up to 8.

Returns a SandboxLaunchResponse containing:

- sandbox_id: Unique identifier for the sandbox
- status: Current status of the sandbox
- python_version: Python version installed
- pip_packages: List of pip packages installed
- apt_packages: List of apt packages installed
- preloaded_secrets: List of predefined secrets injected from the Modal dashboard

This tool is useful for:

- Creating isolated Python environments
- Running code with specific dependencies
- Testing in clean environments
- Executing long-running tasks
- Running GPU-accelerated workloads
- Training machine learning models
- Running inference on large models

Secrets management:

- Use the secrets parameter to create a new secret from key-value pairs
- Predefined secrets configured on the server are injected automatically and applied after custom secrets, so they can override values
- Access secrets as environment variables in your sandbox code via os.environ

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| python_version | No | Python version to use | "3.12" |
| pip_packages | No | Pip packages to install | [] |
| apt_packages | No | Apt packages to install | [] |
| timeout_seconds | No | Maximum runtime in seconds | 600 |
| cpu | No | CPU cores allocated | 2.0 |
| memory | No | Memory allocated in MB | 4096 |
| secrets | No | Environment variables to inject as a new secret | {} |
| volumes | No | Volumes to mount (mount path → volume name) | {} |
| workdir | No | Working directory in the sandbox | /home/solver |
| gpu_type | No | GPU type (T4, L4, A10G, A100-40GB, A100-80GB, L40S, H100, H200, B200) | — |
| gpu_count | No | Number of GPUs | 1 |

Implementation Reference

  • The main handler function that implements the launch_sandbox tool. It builds a custom Modal image with the specified Python version and apt/pip packages, configures secrets, volumes, and GPU, then creates a sandbox via modal.Sandbox.create.aio and returns a SandboxLaunchResponse.

```python
async def launch_sandbox(
    self,
    python_version: str = "3.12",
    pip_packages: List[str] = None,
    apt_packages: List[str] = None,
    timeout_seconds: int = 600,
    cpu: float = 2.0,
    memory: int = 4096,
    secrets: Dict[str, str] = None,
    volumes: Dict[str, str] = None,
    workdir: str = "/home/solver",
    gpu_type: Optional[GPUType] = None,
    gpu_count: Optional[int] = None,
) -> SandboxLaunchResponse:
    pip_packages = pip_packages or []
    apt_packages = apt_packages or []
    secrets = secrets or {}
    inject_predefined_secrets = self.preloaded_secrets or []

    # Build the image with Python version and dependencies
    image = modal.Image.debian_slim(python_version=python_version)

    # Install system dependencies
    if apt_packages:
        image = image.apt_install(*apt_packages)

    # Install Python packages
    if pip_packages:
        image = image.pip_install(*pip_packages)

    # Create secrets for environment variables (the proper Modal way)
    modal_secrets = []
    if secrets:
        secret = modal.Secret.from_dict(secrets)
        modal_secrets.append(secret)
    if inject_predefined_secrets:
        for secret_name in inject_predefined_secrets:
            secret = modal.Secret.from_name(secret_name)
            modal_secrets.append(secret)

    modal_volumes = {}
    if volumes:
        for volume_path, volume_name in volumes.items():
            modal_volumes[volume_path] = modal.Volume.from_name(
                volume_name, create_if_missing=True
            )

    # Configure GPU if specified
    gpu = None
    if gpu_type:
        if gpu_count:
            gpu = f"{gpu_type.value}:{gpu_count}"
        else:
            gpu = gpu_type.value

    # Get or create the Modal app for the specified namespace
    app = modal.App.lookup(self.app_name, create_if_missing=True)

    # Create the sandbox with Modal
    with modal.enable_output():
        logger.info(
            f"Creating sandbox with Python {python_version} in app '{self.app_name}'"
            + (f" and GPU {gpu}" if gpu else "")
        )
        sandbox = await modal.Sandbox.create.aio(
            "/bin/bash",
            image=image,
            app=app,
            timeout=timeout_seconds,
            cpu=cpu,
            memory=memory,
            secrets=modal_secrets,
            volumes=modal_volumes,
            workdir=workdir,
            gpu=gpu,
        )

    # Get the Modal-assigned ID
    sandbox_id = sandbox.object_id
    logger.info(f"Launched sandbox {sandbox_id} with Python {python_version}")

    return SandboxLaunchResponse(
        sandbox_id=sandbox_id,
        status="running",
        python_version=python_version,
        pip_packages=pip_packages,
        apt_packages=apt_packages,
        preloaded_secrets=inject_predefined_secrets,
    )
```
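The volume handling in the handler can be illustrated in isolation. A minimal stdlib sketch of the {mount_path: volume_name} translation, where `make_volume` is a stub standing in for modal.Volume.from_name(..., create_if_missing=True):

```python
from typing import Callable, Dict, Optional

def resolve_volumes(
    volumes: Optional[Dict[str, str]],
    make_volume: Callable[[str], object],
) -> Dict[str, object]:
    """Map each sandbox mount path to a volume handle built by `make_volume`."""
    return {path: make_volume(name) for path, name in (volumes or {}).items()}

# Stub factory for illustration; in the handler this is
# modal.Volume.from_name(name, create_if_missing=True).
mounts = resolve_volumes(
    {"/data": "training-data"}, make_volume=lambda n: f"<Volume {n}>"
)
# mounts == {"/data": "<Volume training-data>"}
```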
  • Pydantic model defining the output schema for the launch_sandbox tool response.
```python
class SandboxLaunchResponse(BaseModel):
    sandbox_id: str
    status: str
    python_version: str
    pip_packages: List[str]
    apt_packages: List[str]
    preloaded_secrets: List[str] = []
```
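To show the shape of the response without pulling in Pydantic, here is a stdlib dataclass stand-in; the field names mirror the model above, while the sandbox ID value is hypothetical:

```python
from dataclasses import asdict, dataclass, field
from typing import List

@dataclass
class SandboxLaunchResponseSketch:
    """Stdlib stand-in mirroring the Pydantic SandboxLaunchResponse fields."""
    sandbox_id: str
    status: str
    python_version: str
    pip_packages: List[str]
    apt_packages: List[str]
    preloaded_secrets: List[str] = field(default_factory=list)

resp = SandboxLaunchResponseSketch(
    sandbox_id="sb-0123",  # hypothetical Modal object ID
    status="running",
    python_version="3.12",
    pip_packages=["numpy"],
    apt_packages=[],
)
```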
  • Detailed tool description used as schema/input spec, defining all parameters, defaults, GPU types, returns, and usage instructions for launch_sandbox.
```python
LAUNCH_SANDBOX = """
Launches a new Modal sandbox with specified configuration.

Parameters:
- python_version: Python version to use (default: "3.12")
- pip_packages: List of pip packages to install
- apt_packages: List of apt packages to install
- timeout_seconds: Maximum runtime in seconds (default: 600)
- cpu: CPU cores allocated (default: 2.0)
- memory: Memory in MB allocated (default: 4096)
- secrets: Dictionary of environment variables to inject (creates new secret)
- volumes: Dictionary of volumes to mount in sandbox, where the key is the
  path in the sandbox and the value is the name of the volume
- workdir: Working directory in sandbox (default: "/home/solver")
- gpu_type: Type of GPU to use (optional). Supported types:
    * T4: Entry-level GPU, good for inference
    * L4: Mid-range GPU, good for general ML tasks
    * A10G: High-performance GPU, good for training
    * A100-40GB: High-end GPU with 40GB memory
    * A100-80GB: High-end GPU with 80GB memory
    * L40S: Latest generation GPU, good for ML workloads
    * H100: Latest generation high-end GPU
    * H200: Latest generation flagship GPU
    * B200: Latest generation enterprise GPU
- gpu_count: Number of GPUs to use (optional, default: 1)
    * A10G supports up to 4 GPUs
    * Other types support up to 8 GPUs

Returns a SandboxLaunchResponse containing:
- sandbox_id: Unique identifier for the sandbox
- status: Current status of the sandbox
- python_version: Python version installed
- pip_packages: List of pip packages installed
- apt_packages: List of apt packages installed
- preloaded_secrets: List of predefined secrets injected from Modal dashboard

This tool is useful for:
- Creating isolated Python environments
- Running code with specific dependencies
- Testing in clean environments
- Executing long-running tasks
- Running GPU-accelerated workloads
- Training machine learning models
- Running inference on large models

Secrets Management:
- Use 'secrets' parameter to create new secrets with key-value pairs
- Predefined secrets are applied after custom secrets, so they can override values
- Access secrets as environment variables in your sandbox code using os.environ
"""
```
  • Registers the launch_sandbox tool with FastMCP, binding the handler method and description.
```python
mcp_app.tool(
    name="launch_sandbox",
    description=ToolDescriptions.LAUNCH_SANDBOX,
)(self.launch_sandbox)
```
  • Enum defining supported GPU types used as type hint for gpu_type parameter in launch_sandbox.
```python
class GPUType(str, Enum):
    T4 = "T4"
    L4 = "L4"
    A10G = "A10G"
    A100_40GB = "A100-40GB"
    A100_80GB = "A100-80GB"
    L40S = "L40S"
    H100 = "H100"
    H200 = "H200"
    B200 = "B200"
```
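The handler turns gpu_type and gpu_count into Modal's "TYPE:COUNT" GPU string. A self-contained sketch of that logic, reusing the enum above:

```python
from enum import Enum
from typing import Optional

class GPUType(str, Enum):
    T4 = "T4"
    L4 = "L4"
    A10G = "A10G"
    A100_40GB = "A100-40GB"
    A100_80GB = "A100-80GB"
    L40S = "L40S"
    H100 = "H100"
    H200 = "H200"
    B200 = "B200"

def build_gpu_spec(gpu_type: Optional[GPUType], gpu_count: Optional[int]) -> Optional[str]:
    """Mirror the handler's GPU string construction: "T4", "A10G:4", or None."""
    if gpu_type is None:
        return None
    return f"{gpu_type.value}:{gpu_count}" if gpu_count else gpu_type.value
```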
