
models

List and manage configured AI models across GPU backends to check availability, verify configurations, and understand task-to-model routing logic.

Instructions

List all configured models across all GPU backends. Shows model tiers (quick/coder/moe) and which are currently loaded.

WHEN TO USE:

  • Check which models are available for tasks

  • Verify model configuration across backends

  • Understand task-to-model routing logic

Returns: JSON with:

  • backends: All configured backends with their models

  • currently_loaded: Models currently in GPU memory (no load time)

  • selection_logic: How tasks map to model tiers
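A response with the three documented top-level keys might look like the sketch below. The backend, model, and tier-description values are illustrative assumptions, not output from an actual deployment.

```python
import json

# Hypothetical response from the `models` tool. Only the three top-level
# keys (backends, currently_loaded, selection_logic) are documented;
# everything nested inside them is made up for illustration.
sample_response = {
    "backends": {
        "backend-a": {
            "models": {"quick": "model-q", "coder": "model-c", "moe": "model-m"},
        },
    },
    "currently_loaded": ["model-q"],  # already in GPU memory, no load time
    "selection_logic": {
        "quick": "short, low-latency tasks",
        "coder": "code generation and editing",
        "moe": "complex reasoning",
    },
}

print(json.dumps(sorted(sample_response)))
```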

Input Schema


No arguments
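Since the tool takes no arguments, a call to it is minimal. The sketch below follows the MCP JSON-RPC 2.0 `tools/call` shape; the `id` value is arbitrary.

```python
import json

# MCP tools are invoked via a JSON-RPC 2.0 "tools/call" request.
# Because `models` accepts no arguments, the arguments object is empty.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "models", "arguments": {}},
}

print(json.dumps(request))
```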


MCP directory API

We provide all the information about MCP servers via our MCP directory API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/zbrdc/delia'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.