batch
Executes multiple AI tasks in parallel across all available GPUs, processing files, reviewing code, summarizing documents, and analyzing content simultaneously for maximum throughput.
Instructions
Executes multiple tasks in PARALLEL across all available GPUs for maximum throughput, distributing work intelligently across local and remote backends.
WHEN TO USE:
- Processing multiple files/documents simultaneously
- Bulk code review, summarization, or analysis
- Any workload that can be parallelized
Args:
- tasks: JSON string containing an array of task objects. Each object can have:
  - task: "quick" | "summarize" | "generate" | "review" | "analyze" | "plan" | "critique"
  - content: The content to process (required)
  - file: Optional file path
  - model: Force tier: "quick" | "coder" | "moe"
  - language: Language hint for code tasks
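Task objects can be checked before submission. Below is a minimal validation sketch in Python, assuming the field names and allowed values listed above; the `validate_task` helper is hypothetical and not part of the tool itself:

```python
# Allowed values taken from the argument description above.
ALLOWED_TASKS = {"quick", "summarize", "generate", "review", "analyze", "plan", "critique"}
ALLOWED_MODELS = {"quick", "coder", "moe"}

def validate_task(obj: dict) -> list:
    """Return a list of problems with one task object; an empty list means valid."""
    errors = []
    if "content" not in obj:
        errors.append("missing required field: content")
    if "task" in obj and obj["task"] not in ALLOWED_TASKS:
        errors.append("unknown task: %r" % obj["task"])
    if "model" in obj and obj["model"] not in ALLOWED_MODELS:
        errors.append("unknown model tier: %r" % obj["model"])
    return errors
```

Running each object through a check like this before serializing the array catches schema mistakes locally instead of after dispatch.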
ROUTING LOGIC:
- Distributes tasks across ALL available GPUs (local + remote)
- Large content (>32K tokens) → routed to a backend with sufficient context
- Normal content → round-robin for parallel execution
- Respects backend health and circuit breakers
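The routing rules above can be sketched roughly as follows. This is an illustrative Python approximation, not the tool's actual implementation; the backend fields (`healthy`, `circuit_open`, `max_context`) and the 4-characters-per-token estimate are assumptions:

```python
import itertools

LARGE_CONTEXT_TOKENS = 32_000  # threshold from the routing rules above

def estimate_tokens(text: str) -> int:
    # crude heuristic: roughly 4 characters per token
    return len(text) // 4

def route(tasks, backends):
    """Assign each task to a backend name, approximating the rules above."""
    # skip unhealthy backends and those with an open circuit breaker
    healthy = [b for b in backends if b["healthy"] and not b["circuit_open"]]
    rr = itertools.cycle(healthy)  # round-robin over the healthy pool
    assignments = []
    for task in tasks:
        tokens = estimate_tokens(task["content"])
        if tokens > LARGE_CONTEXT_TOKENS:
            # prefer a backend whose context window can hold the content
            fits = [b for b in healthy if b["max_context"] >= tokens]
            backend = fits[0] if fits else next(rr)
        else:
            backend = next(rr)
        assignments.append((task, backend["name"]))
    return assignments
```

In this sketch, small tasks alternate across the healthy pool while oversized content is steered to whichever backend advertises a large enough context window.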
Returns: Combined results from all tasks with timing and routing info
Example: batch('[ {"task": "summarize", "content": "doc1..."}, {"task": "review", "content": "code2...", "language": "python"}, {"task": "analyze", "content": "log3..."} ]')
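When building the tasks argument programmatically, serializing a Python list with `json.dumps` avoids hand-written JSON escaping mistakes. The `batch(...)` call itself is left commented out because how the tool is invoked depends on your client:

```python
import json

tasks = [
    {"task": "summarize", "content": "doc1..."},
    {"task": "review", "content": "code2...", "language": "python"},
    {"task": "analyze", "content": "log3..."},
]
payload = json.dumps(tasks)  # the JSON string passed as the `tasks` argument
# batch(payload)  # hypothetical invocation; the actual call site depends on your client
```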
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| tasks | Yes | JSON string containing an array of task objects | - |