# submit_job

Submit a batch job to a SLURM cluster with configurable compute resources, environment, and dependencies.

## Instructions

Submit a SLURM job.
Args:

- `command`: Shell command to execute (e.g. `"python train.py --epochs 100"`)
- `name`: Job name for identification in the SLURM queue
- `nodes`: Number of compute nodes to allocate
- `gpus_per_node`: Number of GPUs per node (0 for CPU-only)
- `ntasks_per_node`: Number of tasks per node
- `cpus_per_task`: Number of CPUs per task
- `memory_per_node`: Memory per node (e.g. `"32GB"`, `"64G"`)
- `time_limit`: Wall time limit (e.g. `"4:00:00"`, `"1-00:00:00"`)
- `partition`: SLURM partition name (e.g. `"gpu"`, `"cpu"`)
- `nodelist`: Specific nodes to use (e.g. `"node001,node002"`)
- `conda`: Conda environment name to activate before running
- `venv`: Path to a Python virtual environment to activate
- `env_vars`: Additional environment variables as key-value pairs
- `log_dir`: Directory for stdout/stderr log files
- `work_dir`: Working directory for the job (defaults to the current working directory)
- `use_ssh`: If true, submit via SSH to a remote SLURM cluster

## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| command | Yes | Shell command to execute | |
| name | No | Job name for identification in the SLURM queue | job |
| nodes | No | Number of compute nodes to allocate | |
| gpus_per_node | No | Number of GPUs per node (0 for CPU-only) | |
| ntasks_per_node | No | Number of tasks per node | |
| cpus_per_task | No | Number of CPUs per task | |
| memory_per_node | No | Memory per node (e.g. "32GB", "64G") | |
| time_limit | No | Wall time limit (e.g. "4:00:00", "1-00:00:00") | |
| partition | No | SLURM partition name | |
| nodelist | No | Specific nodes to use | |
| conda | No | Conda environment name to activate before running | |
| venv | No | Path to a Python virtual environment to activate | |
| env_vars | No | Additional environment variables as key-value pairs | |
| log_dir | No | Directory for stdout/stderr log files | logs |
| work_dir | No | Working directory for the job | |
| use_ssh | No | If true, submit via SSH to a remote SLURM cluster | |
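These inputs map closely onto standard `sbatch` directives. Below is a hypothetical sketch of the batch script the tool might generate for a two-node GPU job; the script layout and log-file naming are assumptions, but the `#SBATCH` flags themselves are standard SLURM options.

```shell
#!/bin/sh
# Sketch of the batch script for a submit_job call with:
#   command="python train.py --epochs 100", name="train", nodes=2,
#   gpus_per_node=4, time_limit="4:00:00", partition="gpu", log_dir="logs"
cat > job.sbatch <<'EOF'
#!/bin/bash
#SBATCH --job-name=train
#SBATCH --nodes=2
#SBATCH --gpus-per-node=4
#SBATCH --time=4:00:00
#SBATCH --partition=gpu
#SBATCH --output=logs/%x-%j.out
#SBATCH --error=logs/%x-%j.err

python train.py --epochs 100
EOF
cat job.sbatch
```

Submission itself would then be `sbatch job.sbatch`, run over an SSH connection to the cluster's login node when `use_ssh` is set.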
## Output Schema

| Name | Required | Description | Default |
|---|---|---|---|

No fields.
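Since `conda` and `venv` are both optional, the tool presumably prepends an activation line to the user command before it runs. A minimal sketch of that preamble logic, assuming `conda` takes precedence when both are given (the variable names and precedence rule are illustrative, not the tool's internals):

```shell
#!/bin/sh
# Hypothetical preamble construction for the job script.
CONDA_ENV="ml"    # value of the `conda` argument (empty if unset)
VENV_PATH=""      # value of the `venv` argument (empty if unset)

preamble=""
if [ -n "$CONDA_ENV" ]; then
  # Activate the named conda environment
  preamble="conda activate $CONDA_ENV"
elif [ -n "$VENV_PATH" ]; then
  # Source the virtualenv's activate script
  preamble=". $VENV_PATH/bin/activate"
fi
echo "$preamble"
```

Entries from `env_vars` would likewise be emitted as `export KEY=value` lines ahead of the command.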