
jungle-grid-mcp-server

Official

submit_job

Submit a GPU workload and receive an immediate job ID. Supports environment variables and asynchronous execution for inference, training, fine-tuning, or batch jobs.

Instructions

Submit a GPU workload to Jungle Grid. Returns a job_id immediately — the job runs asynchronously. Supports environment variables for runtime configuration or env-backed code payloads when the command would be too long. After submitting, prefer stream_job_logs for real-time output, then use get_job or get_job_logs for final status and logs. Managed jobs automatically upload regular files written under /workspace/artifacts as Jungle Grid artifacts. Use estimate_job first if you want a cost estimate before committing.
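The submit-then-monitor flow described above can be sketched with a generic MCP client. Here `call_tool(name, arguments)` is a hypothetical stand-in for whatever tool-invocation call your client provides; only the tool names and fields shown on this page come from the documentation.

```python
# Sketch of the recommended flow: estimate, submit, stream logs, fetch status.
# `call_tool` is a hypothetical client function, not part of the documented API.

def run_gpu_job(call_tool):
    args = {
        "workload_type": "training",
        "image": "pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime",
        "command": ["python", "train.py", "--epochs", "10"],
    }
    call_tool("estimate_job", args)                   # optional cost check first
    job = call_tool("submit_job", args)               # returns a job_id immediately
    job_id = job["job_id"]
    call_tool("stream_job_logs", {"job_id": job_id})  # real-time output while running
    return call_tool("get_job", {"job_id": job_id})   # final status and logs
```

Any files the training script writes under /workspace/artifacts would, per the description, be uploaded automatically as Jungle Grid artifacts.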

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| name | No | Optional readable job name. A name is generated if omitted. | |
| workload_type | Yes | Type of GPU workload. | |
| image | Yes | Docker image to run (e.g. 'pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime'). | |
| command | Yes | Container entrypoint arguments (e.g. ['python', 'train.py', '--epochs', '10']). | |
| model_size_gb | No | Approximate model size in GB. Used to select the right GPU tier for inference jobs. | |
| disk_gb | No | Optional managed-provider local disk override in GB. Leave unset to let Jungle Grid auto-size from model_size_gb. | |
| optimize_for | No | Scheduling optimization goal. 'speed' prioritises latency; 'cost' minimises spend. | |
| latency_priority | No | Latency sensitivity. Use 'high' for real-time inference. | |
| cost_priority | No | Cost sensitivity. | |
| gpu_type | No | Optional exact GPU override. | |
| gpu_class | No | Optional soft GPU class preference. | |
| region_preference | No | Optional preferred region such as us-east or eu-west. | |
| region_mode | No | Region preference mode. | |
| environment | No | Environment variables injected into the container. Use this for large inline scripts such as CODE when you want to keep the command array short. | |
| huggingface_credential_id | No | Optional saved Hugging Face credential to inject into the managed runtime. Falls back to your account default when omitted. | |
| webhook_url | No | Optional HTTPS URL to receive signed lifecycle event callbacks. | |
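As a worked illustration of how these parameters combine, an inference submission that relies on model_size_gb to auto-select the GPU tier might look like the following. Every value here is invented for the example; only the field names come from the schema above.

```python
# Illustrative submit_job arguments; all values are made-up examples.
# Only workload_type, image, and command are required by the schema.
inference_args = {
    "workload_type": "inference",
    "image": "pytorch/pytorch:2.2.0-cuda12.1-cudnn8-runtime",
    "command": ["python", "serve.py"],       # hypothetical entrypoint
    "model_size_gb": 14,                     # lets Jungle Grid pick the GPU tier
    "latency_priority": "high",              # real-time inference
    "region_preference": "us-east",
    "environment": {"MAX_BATCH_SIZE": "8"},  # hypothetical runtime variable
}
```

Note that disk_gb is deliberately left unset here, so auto-sizing from model_size_gb applies.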
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It discloses asynchronous execution, immediate return, environment-variable support, and artifact-upload behavior. Some details are missing, such as cancellation and error handling.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 4/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Six sentences, front-loaded with the core purpose and return type. There is some redundancy in the environment-variable mention, but the description is overall efficient and well organized.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

With 16 parameters and no output schema, the description explains post-submission steps and artifact upload. Error handling is not covered, but the coverage is sufficient for a submission tool with good sibling-tool support.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 4/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 100%, and the description still adds context beyond it: environment variables for large scripts, model_size_gb for GPU-tier selection, and disk_gb auto-sizing. It conveys meaning beyond the schema.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states that the tool submits a GPU workload to Jungle Grid, returns a job_id immediately, and runs asynchronously. It distinguishes itself from siblings such as estimate_job, stream_job_logs, and get_job by specifying the process flow.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

It explicitly recommends using estimate_job first for cost estimation and, after submission, stream_job_logs for real-time output followed by get_job or get_job_logs for final status and logs. It also explains when to use environment variables (for large commands).

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

