rlm_sub_query_batch

by egoughnour

Process multiple data chunks in parallel to analyze large datasets beyond standard prompt limits, managing concurrency for efficient resource use.

Instructions

Process multiple chunks in parallel. Respects a concurrency limit to manage system resources.

Args:
    query: Question/instruction for each sub-call
    context_name: Context identifier
    chunk_indices: List of chunk indices to process
    provider: LLM provider - 'auto', 'ollama', or 'claude-sdk'
    model: Model to use (provider-specific defaults apply)
    concurrency: Max parallel requests (default 4, max 8)
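
For illustration, here is a minimal sketch of calling this tool from the official MCP Python SDK over stdio. The launch command and the "annual_report" context name are hypothetical placeholders, not part of this server's documentation; substitute however you actually run massive-context-mcp.

```python
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    # Hypothetical launch command for the server; adjust to your install.
    params = StdioServerParameters(command="python", args=["-m", "massive_context_mcp"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            result = await session.call_tool(
                "rlm_sub_query_batch",
                arguments={
                    "query": "Summarize the key findings in this chunk.",
                    "context_name": "annual_report",  # hypothetical context identifier
                    "chunk_indices": [0, 1, 2, 3],
                    "provider": "auto",
                    "concurrency": 4,
                },
            )
            print(result.content)

asyncio.run(main())
```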

Input Schema

Name           Required  Description                                       Default
query          Yes       Question/instruction for each sub-call            -
context_name   Yes       Context identifier                                -
chunk_indices  Yes       List of chunk indices to process                  -
provider       No        LLM provider: 'auto', 'ollama', or 'claude-sdk'   auto
model          No        Model to use (provider-specific defaults apply)   -
concurrency    No        Max parallel requests (max 8)                     4
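
The concurrency cap (default 4, max 8) suggests the server fans sub-queries out in parallel while bounding how many are in flight at once. A minimal sketch of that pattern with asyncio.Semaphore follows; the sub_query helper is a hypothetical stand-in for the real provider-backed call, not this server's actual internals.

```python
import asyncio

async def sub_query(query: str, context_name: str, chunk_index: int) -> str:
    """Hypothetical single-chunk call; stands in for the real LLM sub-query."""
    await asyncio.sleep(0.1)  # simulate an LLM round-trip
    return f"result for chunk {chunk_index}"

async def sub_query_batch(query, context_name, chunk_indices, concurrency=4):
    # Clamp to the documented maximum of 8 parallel requests.
    sem = asyncio.Semaphore(min(concurrency, 8))

    async def bounded(idx):
        async with sem:  # at most `concurrency` sub-queries in flight
            return await sub_query(query, context_name, idx)

    return await asyncio.gather(*(bounded(i) for i in chunk_indices))

print(asyncio.run(sub_query_batch("Summarize.", "annual_report", range(6))))
```

A semaphore keeps results ordered by chunk index (gather preserves input order) while still letting slow and fast chunks overlap, which is why this shape is common for bounded fan-out.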

MCP directory API

We provide all of the information about MCP servers through our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/egoughnour/massive-context-mcp'
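
For scripted access, a small Python equivalent of the curl command above, assuming the endpoint returns a JSON body:

```python
import requests

# Same endpoint as the curl command above; assumes a JSON response.
url = "https://glama.ai/api/mcp/v1/servers/egoughnour/massive-context-mcp"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())
```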

If you have feedback or need assistance with the MCP directory API, please join our Discord server.