# rlm_sub_query_batch
Process multiple data chunks in parallel to analyze large datasets beyond standard prompt limits, managing concurrency for efficient resource use.
## Instructions

Processes multiple chunks in parallel, respecting a concurrency limit to manage system resources.

Args:
- `query`: Question/instruction for each sub-call
- `context_name`: Context identifier
- `chunk_indices`: List of chunk indices to process
- `provider`: LLM provider: `'auto'`, `'ollama'`, or `'claude-sdk'`
- `model`: Model to use (provider-specific defaults apply)
- `concurrency`: Max parallel requests (default 4, max 8)
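The concurrency-limited fan-out described above can be sketched with `asyncio`. This is a minimal illustration, not the tool's actual implementation: `sub_query` here is a hypothetical stand-in for a single sub-call, and only the semaphore pattern and the documented cap of 8 are taken from the description.

```python
import asyncio

async def sub_query(query: str, context_name: str, chunk_index: int) -> str:
    # Hypothetical stand-in for one sub-call; the real tool would
    # dispatch this chunk to the configured LLM provider.
    await asyncio.sleep(0)
    return f"result for chunk {chunk_index}"

async def sub_query_batch(query, context_name, chunk_indices, concurrency=4):
    # Cap in-flight requests, honoring the documented maximum of 8.
    sem = asyncio.Semaphore(min(concurrency, 8))

    async def run_one(idx):
        async with sem:  # at most `concurrency` calls run at once
            return await sub_query(query, context_name, idx)

    # gather() preserves input order, so results align with chunk_indices.
    return await asyncio.gather(*(run_one(i) for i in chunk_indices))

results = asyncio.run(
    sub_query_batch("summarize", "logs", [0, 1, 2], concurrency=2)
)
```

A semaphore is a simple way to bound parallelism without batching: all tasks are created up front, but only `concurrency` of them hold a permit at any moment.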
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| query | Yes | Question/instruction for each sub-call | |
| context_name | Yes | Context identifier | |
| chunk_indices | Yes | List of chunk indices to process | |
| provider | No | LLM provider: `auto`, `ollama`, or `claude-sdk` | auto |
| model | No | Model to use (provider-specific defaults apply) | |
| concurrency | No | Max parallel requests (max 8) | 4 |
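To make the schema concrete, here is a hypothetical arguments payload shaped like the table above. The field names come from the schema; the query text, context name, and chunk indices are illustrative values only.

```python
# Hypothetical example arguments for rlm_sub_query_batch.
args = {
    # required fields
    "query": "List the error codes mentioned in this chunk",
    "context_name": "server_logs",
    "chunk_indices": [0, 1, 2, 3],
    # optional fields (documented defaults shown)
    "provider": "auto",
    "concurrency": 4,  # capped at 8
}

# A caller-side sanity check: all required fields must be present.
required = {"query", "context_name", "chunk_indices"}
missing = required - set(args)
```

Omitting `provider` and `concurrency` entirely would be equivalent, since the defaults shown match the schema.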