Glama · atriumn (by atriumn)

get_model_details

Look up pricing, context window, and capabilities for any LLM model. Fuzzy matching identifies models even without exact names.

Instructions

Look up pricing, context window, and capabilities for an LLM model. Uses fuzzy matching so you don't need the exact model key.

Input Schema

Name        Required  Description                                                                      Default
model_name  Yes       Model name to look up (e.g. 'claude-sonnet-4-5', 'gpt-4o', 'gemini-2.0-flash')   —
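
An agent supplies only model_name when calling this tool. A minimal tools/call payload might look like the following sketch (the field layout follows the MCP tool-calling convention; the model name is illustrative):

```typescript
// Illustrative tools/call request body for get_model_details.
const request = {
  method: "tools/call",
  params: {
    name: "get_model_details",
    arguments: { model_name: "claude-sonnet-4-5" },
  },
};

console.log(JSON.stringify(request.params.arguments));
```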

Implementation Reference

  • src/tools.ts:6-21 (registration)
    Registration of the 'get_model_details' tool in the tools array, defining its name, description, and input schema (requires model_name string).
    {
      name: "get_model_details",
      description:
        "Look up pricing, context window, and capabilities for an LLM model. Uses fuzzy matching so you don't need the exact model key.",
      inputSchema: {
        type: "object" as const,
        properties: {
          model_name: {
            type: "string",
            description:
              "Model name to look up (e.g. 'claude-sonnet-4-5', 'gpt-4o', 'gemini-2.0-flash')",
          },
        },
        required: ["model_name"],
      },
    },
  • Zod validation schema for get_model_details, requiring a non-empty model_name string.
    const getModelDetailsSchema = z.object({
      model_name: z.string().min(1),
    });
  • Handler function for the get_model_details tool: parses args, fetches models, fuzzy-matches the model name, and returns formatted model details (pricing, context, capabilities).
    case "get_model_details": {
      const { model_name } = getModelDetailsSchema.parse(args);
      const models = await getModels();
      const { entry: model, isFineTuned } = fuzzyMatchWithMetadata(model_name, models);
    
      if (!model) {
        return {
          content: [
            {
              type: "text",
              text: `No model found matching "${model_name}". Try a different name or use compare_models to browse available models.`,
            },
          ],
        };
      }
    
      const details = formatModelDetails(model);
      const note = isFineTuned
        ? `\n⚠️ Note: This is pricing for the base model (${model.key}). Fine-tuned models use the same pricing as their base model.`
        : "";
    
      return {
        content: [{ type: "text", text: details + note }],
      };
    }
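
fuzzyMatchWithMetadata is referenced above but not shown in this listing. As a rough sketch only (the real matcher likely differs), a minimal version might normalize the query, strip a fine-tune prefix such as 'ft:', and fall back from exact to substring matching:

```typescript
interface Entry {
  key: string;
}

// Hypothetical stand-in for fuzzyMatchWithMetadata: detect a fine-tune
// prefix, normalize the query, then try an exact match before falling
// back to a substring match over the model keys.
function fuzzyMatch(
  query: string,
  entries: Entry[],
): { entry: Entry | null; isFineTuned: boolean } {
  const isFineTuned = query.startsWith("ft:");
  const q = (isFineTuned ? query.slice(3).split(":")[0] : query).toLowerCase();
  const exact = entries.find((e) => e.key.toLowerCase() === q);
  if (exact) return { entry: exact, isFineTuned };
  const partial = entries.find((e) => e.key.toLowerCase().includes(q));
  return { entry: partial ?? null, isFineTuned };
}
```

For example, fuzzyMatch("gpt-4o", models) would still resolve against a dated key like "gpt-4o-2024-05-13", and a fine-tuned name like "ft:gpt-4o:acme" would resolve to its base model with isFineTuned set, matching the pricing note the handler appends.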
  • Helper function that formats a ModelEntry into a human-readable string with pricing (including tiered and prompt caching), context window, and capabilities details.
    function formatModelDetails(model: ModelEntry): string {
      const capabilities: string[] = [];
      if (model.supports_vision) capabilities.push("vision");
      if (model.supports_function_calling) capabilities.push("function_calling");
      if (model.supports_parallel_function_calling) capabilities.push("parallel_function_calling");
    
      const hasTieredInput = model.input_cost_per_million_above_200k != null;
      const hasTieredOutput = model.output_cost_per_million_above_200k != null;
      const hasTiered = hasTieredInput || hasTieredOutput;
    
      const tieredLines: string[] = [];
      if (hasTiered) {
        tieredLines.push(``);
        tieredLines.push(
          `Tiered Pricing (above ${formatTokenCount(TIERED_PRICING_THRESHOLD)} tokens, per 1M):`,
        );
        if (hasTieredInput) {
          tieredLines.push(`  Input:  ${formatCost(model.input_cost_per_million_above_200k ?? 0)}`);
        }
        if (hasTieredOutput) {
          tieredLines.push(`  Output: ${formatCost(model.output_cost_per_million_above_200k ?? 0)}`);
        }
      }
    
      const cachingLines: string[] = [];
      if (model.cache_read_input_token_cost_per_million != null) {
        cachingLines.push(``);
        cachingLines.push(`Prompt Caching:`);
        cachingLines.push(
          `  Cached input: ${formatCost(model.cache_read_input_token_cost_per_million)} / 1M tokens`,
        );
      }
    
      return [
        `Model: ${model.key}`,
        `Provider: ${model.litellm_provider}`,
        `Mode: ${model.mode}`,
        ``,
        `Pricing (per 1M tokens):`,
        `  Input:  ${formatCost(model.input_cost_per_million)}`,
        `  Output: ${formatCost(model.output_cost_per_million)}`,
        ...tieredLines,
        ...cachingLines,
        ``,
        `Context Window:`,
        `  Max Input:  ${formatTokenCount(model.max_input_tokens)}`,
        `  Max Output: ${formatTokenCount(model.max_output_tokens)}`,
        ...(model.max_tokens !== null ? [`  Max Tokens: ${formatTokenCount(model.max_tokens)}`] : []),
        ``,
        `Capabilities: ${capabilities.length > 0 ? capabilities.join(", ") : "none listed"}`,
      ].join("\n");
    }
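
formatCost, formatTokenCount, and TIERED_PRICING_THRESHOLD are helpers assumed by the formatter above but not shown in this listing. Hypothetical minimal versions, purely to make the output concrete (the repo's actual implementations may format differently), might be:

```typescript
// Hypothetical stand-ins for the helpers used by formatModelDetails.
function formatCost(costPerMillion: number): string {
  // Render a per-1M-token cost as a dollar amount.
  return `$${costPerMillion.toFixed(2)}`;
}

function formatTokenCount(tokens: number | null): string {
  // Compact token counts: 200000 -> "200K", 1000000 -> "1M".
  if (tokens == null) return "unknown";
  if (tokens >= 1_000_000) return `${(tokens / 1_000_000).toFixed(0)}M`;
  if (tokens >= 1_000) return `${(tokens / 1_000).toFixed(0)}K`;
  return String(tokens);
}

console.log(formatCost(3), formatTokenCount(200_000)); // "$3.00 200K"
```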
Behavior: 3/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

No annotations are provided, so the description carries the full burden. It adds value by disclosing fuzzy matching behavior, but does not mention failure cases, authentication needs, or any side effects. More detail would improve transparency.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

The description is extremely concise: two sentences with no wasted words. The primary function is stated in the first sentence, and the second adds a key behavioral detail. Every word earns its place.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Given that the tool has only one parameter and no output schema, the description is fairly complete. It covers the main purpose and the key behavior (fuzzy matching), and the parameter is well-documented. It could mention the return format, but it is adequate overall.

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema coverage is 100% for the single parameter, and the schema already describes it well with examples. The description does not add semantics beyond the schema, so the baseline score of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

The description clearly states the tool's function: looking up pricing, context window, and capabilities for an LLM model. It distinguishes itself from sibling tools by calling out unique behavior like fuzzy matching, and the verb 'look up' paired with the resource 'LLM model details' is specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 3/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

The description implies the tool should be used when you need model details without knowing an exact name, but it does not explicitly state when to use it versus alternatives like 'compare_models' or 'refresh_prices'. No when-not-to-use or exclusion criteria are provided.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/atriumn/tokencost-dev'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.