llm_select_agent
Analyzes a task prompt and recommends an agent runtime and model for session-level routing, based on task complexity and the selected routing profile.
Instructions
Classify a task prompt and return the recommended agent CLI + model for session-level routing.
Use this BEFORE starting a Claude Code / Codex / Gemini CLI session to pick the right
agent runtime for the task. This is session-level routing — it selects which agent to
invoke, not which model to call mid-session.
Decision tree (profile × complexity):
budget + simple/moderate → codex + gpt-4o-mini
budget + complex → codex + gpt-4o (Codex handles most coding; escalate if needed)
balanced + simple → codex + gpt-4o-mini
balanced + moderate → claude_code + sonnet
balanced + complex → claude_code + opus
premium + any → claude_code + opus
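The profile × complexity decision tree above can be sketched as a lookup table. This is an illustrative reconstruction of the routing rules, not the tool's actual implementation:

```python
# Sketch of the session-level routing table described above.
# Keys are (profile, complexity); values are (agent, model).
ROUTING = {
    ("budget", "simple"): ("codex", "gpt-4o-mini"),
    ("budget", "moderate"): ("codex", "gpt-4o-mini"),
    ("budget", "complex"): ("codex", "gpt-4o"),
    ("balanced", "simple"): ("codex", "gpt-4o-mini"),
    ("balanced", "moderate"): ("claude_code", "sonnet"),
    ("balanced", "complex"): ("claude_code", "opus"),
    # premium + any complexity -> claude_code + opus
    ("premium", "simple"): ("claude_code", "opus"),
    ("premium", "moderate"): ("claude_code", "opus"),
    ("premium", "complex"): ("claude_code", "opus"),
}

def route(profile: str = "balanced", complexity: str = "moderate") -> tuple[str, str]:
    """Return the (agent, model) pair for a profile/complexity combination."""
    return ROUTING[(profile, complexity)]
```

For example, `route("balanced", "complex")` yields `("claude_code", "opus")`, matching the table above.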
Returns JSON with:
primary — agent binary name: "claude_code" | "codex" | "gemini_cli"
primary_model — model flag value (pass via -m or --model)
fallback — fallback agent if primary unavailable
fallback_model — model for fallback
task_type — classified task type (code / analyze / generate / research / query)
complexity — simple | moderate | complex
confidence — classifier confidence 0–1
reason — one-line classification rationale
env_check — dict of required env vars and whether they're set
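Putting the fields above together, a result for a complex balanced-profile coding task might look like the following. All values are hypothetical, including the environment variable names inside `env_check`:

```python
# Hypothetical example of the JSON payload described above,
# shown as a Python dict. Field names match the documented schema;
# the values (and the env var names) are illustrative only.
example_result = {
    "primary": "claude_code",        # agent binary name
    "primary_model": "opus",         # pass via -m or --model
    "fallback": "codex",             # used if primary is unavailable
    "fallback_model": "gpt-4o",
    "task_type": "code",             # code / analyze / generate / research / query
    "complexity": "complex",         # simple | moderate | complex
    "confidence": 0.87,              # classifier confidence, 0-1
    "reason": "Multi-file refactor with cross-cutting changes",
    "env_check": {"ANTHROPIC_API_KEY": True, "OPENAI_API_KEY": True},
}
```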
Args:
prompt: The task description to classify (same text you'd pass to the agent).
profile: Routing profile — "budget", "balanced", or "premium" (default: "balanced").
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | Task description to classify | |
| profile | No | Routing profile: "budget", "balanced", or "premium" | balanced |
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | JSON object with the fields listed above | |