Glama

compare_models

Compare 2-5 AI models simultaneously by sending the same prompt to each. Get side-by-side results with latency and token usage metrics for performance evaluation.

Instructions

Query 2-5 models in parallel with the same prompt. Returns side-by-side comparison with latency and token metrics.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| models | Yes | List of model IDs to compare (2-5 models) | |
| prompt | Yes | The prompt to send to all models | |
| system_prompt | No | Optional system prompt for all models | |
| format | No | Response format: 'brief' for a token-efficient summary, 'detailed' for the full response | detailed |
| temperature | No | | |
| max_tokens | No | | |
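As a sketch of how a client might assemble arguments for this tool, the snippet below builds a payload matching the input schema above and enforces the 2-5 model constraint. The helper name `build_compare_args` and the model IDs are illustrative, not part of HydraMCP; only the field names and the 'brief'/'detailed' values come from the schema.

```python
def build_compare_args(models, prompt, system_prompt=None,
                       format="detailed", temperature=None, max_tokens=None):
    """Build an arguments dict for the compare_models tool (hypothetical helper)."""
    if not 2 <= len(models) <= 5:
        # The schema allows comparing 2-5 models at once.
        raise ValueError("compare_models accepts 2-5 model IDs")
    if format not in ("brief", "detailed"):
        raise ValueError("format must be 'brief' or 'detailed'")
    args = {"models": list(models), "prompt": prompt, "format": format}
    # Optional fields are omitted rather than sent as null.
    if system_prompt is not None:
        args["system_prompt"] = system_prompt
    if temperature is not None:
        args["temperature"] = temperature
    if max_tokens is not None:
        args["max_tokens"] = max_tokens
    return args

# Example payload with placeholder model IDs:
args = build_compare_args(
    ["model-a", "model-b"],
    "Summarize the benefits of caching.",
    format="brief",
)
```

Sending fewer than two or more than five model IDs raises before any request is made, which keeps the error local instead of round-tripping it through the server.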


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Pickle-Pixel/HydraMCP'
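The same endpoint can be queried from Python. The sketch below only constructs the URL shown above; the actual fetch is commented out, and nothing is assumed about the shape of the JSON the API returns.

```python
# Base URL taken from the curl example above.
BASE = "https://glama.ai/api/mcp/v1/servers"

def server_url(owner: str, repo: str) -> str:
    """Build the Glama MCP directory URL for a given server (helper is illustrative)."""
    return f"{BASE}/{owner}/{repo}"

url = server_url("Pickle-Pixel", "HydraMCP")
# To actually fetch the metadata (requires network access):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(url))
```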

If you have feedback or need assistance with the MCP directory API, please join our Discord server.