queue_status

Monitor the status of AI model queues to track loaded models, pending requests, and GPU memory usage for performance analysis and debugging.

Instructions

Get current status of the model queue system.

Shows loaded models, queued requests, and GPU memory usage. Useful for monitoring queue performance and debugging loading issues.

Returns: JSON with queue status, loaded models, and pending requests
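The exact response schema is not documented here. Assuming a payload with the fields named above — loaded models, pending requests, and GPU memory — a client might inspect it as follows (all field names below are illustrative assumptions, not a confirmed schema):

```python
import json

# Hypothetical queue_status response; the field names are assumptions
# based on the description above, not a documented schema.
raw = '''{
  "loaded_models": ["llama-3-8b", "qwen2.5-7b"],
  "pending_requests": 3,
  "gpu_memory": {"used_mb": 14200, "total_mb": 24576}
}'''

status = json.loads(raw)

# Flag potential pressure: requests waiting while GPU memory is nearly full.
mem = status["gpu_memory"]
usage = mem["used_mb"] / mem["total_mb"]
if status["pending_requests"] > 0 and usage > 0.9:
    print("queue backlog with high GPU memory usage")
else:
    print(f"{len(status['loaded_models'])} models loaded, "
          f"{status['pending_requests']} pending, "
          f"{usage:.0%} GPU memory used")
```

A check like this is useful for debugging loading issues: a growing pending count alongside near-full GPU memory suggests models are not being evicted fast enough.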

Input Schema

This tool takes no arguments.
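Because the tool takes no arguments, invoking it over MCP is a plain `tools/call` request with an empty arguments object. A minimal sketch of the JSON-RPC 2.0 message (the `id` value is arbitrary):

```python
import json

# MCP tool invocations use JSON-RPC 2.0 with the "tools/call" method.
# queue_status takes no arguments, so "arguments" is an empty object.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "queue_status", "arguments": {}},
}

print(json.dumps(request))
```

An MCP client library would build and send this message for you; the sketch only shows the wire format for the call.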

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/zbrdc/delia'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.