llm_auto
Routes AI tasks to the optimal model across 20+ providers, selecting by task type and budget. Tracks cumulative savings persistently in SQLite, even from hosts without client-side hooks.
Instructions
Auto-routing wrapper with persistent savings tracking — works from any host.
Equivalent to llm_route but additionally:
- Flushes pending hook-written savings records into SQLite before routing.
- Appends a compact savings envelope every 5 calls so you can see the
cumulative value across all sessions and hosts without running llm_savings.
Use llm_auto instead of llm_route when you are in a host that lacks a
UserPromptSubmit hook (Codex CLI, Claude Desktop, GitHub Copilot) — the
savings are tracked server-side, so they accumulate correctly regardless of
which client triggered the call.
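The server-side tracking described above can be sketched roughly as follows. This is an illustrative model only: the table name, column names, and `record_call` helper are assumptions, not llm_auto's actual schema; only the behavior (persist every call to SQLite, emit a cumulative envelope every 5th call) comes from the description above.

```python
import sqlite3

# Hypothetical schema -- the real llm_auto database may differ.
def init(conn):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS savings "
        "(call_id INTEGER PRIMARY KEY, saved_usd REAL)"
    )

def record_call(conn, saved_usd):
    """Persist one call's savings; return a cumulative envelope every 5th call."""
    conn.execute("INSERT INTO savings (saved_usd) VALUES (?)", (saved_usd,))
    conn.commit()
    count, total = conn.execute(
        "SELECT COUNT(*), SUM(saved_usd) FROM savings"
    ).fetchone()
    if count % 5 == 0:
        # Compact savings envelope, appended to the response every 5 calls.
        return {"calls": count, "cumulative_savings_usd": total}
    return None

conn = sqlite3.connect(":memory:")
init(conn)
envelopes = [record_call(conn, 0.01) for _ in range(10)]
# Envelopes appear only on the 5th and 10th calls.
```

Because the counter lives in SQLite rather than in any client, the cadence and totals survive across sessions and hosts, which is the point of routing through llm_auto on hook-less clients.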
Args:
prompt: The task or question to route.
task_type: Optional hint — "query", "research", "generate", "analyze", "code".
profile_override: Force a routing profile — "budget", "balanced", or "premium".
system_prompt: Optional system instructions.
context: Optional conversation context.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| prompt | Yes | The task or question to route. | |
| task_type | No | Hint: "query", "research", "generate", "analyze", "code". | |
| profile_override | No | Force a routing profile: "budget", "balanced", or "premium". | |
| system_prompt | No | Optional system instructions. | |
| context | No | Optional conversation context. | |
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | | |
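A minimal sketch of how a caller might shape and sanity-check an llm_auto request against the input schema above. The `build_request` helper is hypothetical and the transport (the actual MCP tool call) is omitted; only the field names and the required/optional split come from the schema.

```python
# Field names taken from the Input Schema table; validation logic is illustrative.
REQUIRED = {"prompt"}
OPTIONAL = {"task_type", "profile_override", "system_prompt", "context"}

def build_request(**fields):
    """Reject unknown fields and require 'prompt', per the schema."""
    unknown = set(fields) - REQUIRED - OPTIONAL
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    if not REQUIRED <= set(fields):
        raise ValueError("'prompt' is required")
    return fields

req = build_request(
    prompt="Summarize recent failures in this log",
    task_type="analyze",
    profile_override="budget",
)
```

Omitting `prompt` raises, while any subset of the optional fields is accepted, matching the Yes/No column of the schema.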