llm_cache_stats
View prompt classification cache statistics showing hit rates, entry counts, and memory usage to track LLM routing efficiency.
Instructions
Show prompt classification cache statistics: hit rate, entry count, and memory usage.
The cache stores ClassificationResult objects keyed by SHA-256(prompt + quality_mode + min_model).
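The key derivation described above can be sketched as follows. This is a minimal illustration, not the actual implementation: the function name, the use of a `|` delimiter between fields, and the example argument values are all assumptions.

```python
import hashlib

def classification_cache_key(prompt: str, quality_mode: str, min_model: str) -> str:
    """Hypothetical sketch: SHA-256 over prompt + quality_mode + min_model.

    The delimiter and field order are illustrative; the real cache may
    concatenate or serialize these inputs differently.
    """
    material = f"{prompt}|{quality_mode}|{min_model}"
    return hashlib.sha256(material.encode("utf-8")).hexdigest()

# Same inputs always yield the same key, so repeat prompts hit the cache;
# changing any field (e.g. quality_mode) produces a different key.
key_a = classification_cache_key("Summarize this report", "balanced", "small")
key_b = classification_cache_key("Summarize this report", "thorough", "small")
```

Because budget pressure is excluded from the key, two requests that differ only in budget state map to the same cached entry, which is why pressure must be applied fresh on every lookup.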
Budget pressure is always applied fresh, so cached classifications stay valid.
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| result | Yes | Cache statistics: hit rate, entry count, and memory usage. | |