models
List and manage configured AI models across GPU backends to check availability, verify configurations, and understand task-to-model routing logic.
Instructions
List all configured models across all GPU backends. Shows model tiers (quick/coder/moe) and which are currently loaded.
WHEN TO USE:
- Check which models are available for tasks
- Verify model configuration across backends
- Understand task-to-model routing logic
Returns: JSON with:
- `backends`: all configured backends with their models
- `currently_loaded`: models currently in GPU memory (no load time)
- `selection_logic`: how tasks map to model tiers
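A minimal sketch of consuming this response on the client side. The top-level field names (`backends`, `currently_loaded`, `selection_logic`) follow the documented schema, but the backend names, model names, and task-to-tier mappings in the payload are illustrative assumptions, not actual configuration:

```python
import json

# Hypothetical response payload; only the top-level keys come from the
# documented schema — backends, models, and tiers here are made up.
response = json.loads("""
{
  "backends": {
    "gpu-0": {"models": {"quick": "llama-3-8b", "coder": "deepseek-coder-33b"}},
    "gpu-1": {"models": {"moe": "mixtral-8x7b"}}
  },
  "currently_loaded": ["llama-3-8b"],
  "selection_logic": {"summarize": "quick", "refactor": "coder", "reasoning": "moe"}
}
""")

def model_for_task(task: str) -> tuple[str, bool]:
    """Resolve a task to a model via selection_logic, and report whether
    that model is already in GPU memory (i.e. usable with no load time)."""
    tier = response["selection_logic"][task]
    for backend in response["backends"].values():
        model = backend["models"].get(tier)
        if model is not None:
            return model, model in response["currently_loaded"]
    raise KeyError(f"no backend provides tier {tier!r}")

print(model_for_task("summarize"))  # already loaded -> no load time
print(model_for_task("refactor"))   # configured but not yet in GPU memory
```

Checking `currently_loaded` before dispatching lets a caller prefer models that are already resident and avoid a cold-load delay.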
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| No arguments | | | |