# llm_query

Route queries to language models based on task complexity. Automatically selects cost-effective models for simple tasks and high-performance models for complex analysis, optimizing for quality and budget.

## Instructions

Send a general query to the best available LLM.
Routes by complexity: simple→Haiku/Flash, moderate→Sonnet/GPT-4o, complex→Opus/o3.
Args:

- prompt: The question or prompt to send.
- complexity: Task complexity, one of "simple", "moderate", or "complex". Drives model selection: simple→cheap (Haiku/Flash), moderate→balanced (Sonnet/GPT-4o), complex→premium (Opus/o3). Auto-detected from prompt length when omitted.
- model: Explicit model override; bypasses complexity routing entirely.
- system_prompt: Optional system instructions.
- temperature: Sampling temperature (0.0-2.0).
- max_tokens: Maximum output tokens.
- context: Optional conversation context to help the model understand the broader task.

## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The question or prompt to send. | |
| complexity | No | Task complexity: "simple", "moderate", or "complex". | auto-detected from prompt length |
| model | No | Explicit model override; bypasses complexity routing. | |
| system_prompt | No | Optional system instructions. | |
| temperature | No | Sampling temperature (0.0-2.0). | |
| max_tokens | No | Maximum output tokens. | |
| context | No | Optional conversation context for the broader task. | |
## Output Schema

| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | The model's response text. | |
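A hypothetical invocation shaped by the schemas above. Only `prompt` is required; all other fields are optional. The concrete values, and the placeholder response text, are illustrative:

```python
# Example arguments matching the input schema; "prompt" is the only
# required field, the rest are optional (values are illustrative).
request = {
    "prompt": "Summarize the trade-offs between B-trees and LSM-trees.",
    "complexity": "moderate",   # routes to the balanced tier (Sonnet/GPT-4o)
    "temperature": 0.3,         # within the documented 0.0-2.0 range
    "max_tokens": 512,
}

# The output schema has a single required field:
response = {"result": "<model answer text>"}
```

Omitting `complexity` would leave tier selection to the prompt-length auto-detection; supplying `model` would skip routing altogether.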