
get_model_info

Retrieve detailed LLM/VLM model information including pricing, benchmarks, capabilities, and API code examples to support informed model selection decisions.

Instructions

Get detailed information about a specific LLM/VLM model: pricing, benchmarks, capabilities, and ready-to-use API code example. Returns structured Markdown (~300 tokens).

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| model | Yes | Model ID or partial name (e.g., "anthropic/claude-sonnet-4.6", "gpt-5.1", "gemini") | (none) |
| include_api_example | No | Include API usage code example | true |
| api_format | No | API example format (openai_sdk, curl, or python_requests) | openai_sdk |
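A request to this tool supplies arguments matching the schema above. The sketch below shows a typical argument object and how the handler's defaulting behaves (the type alias is illustrative, not the server's generated typings):

```typescript
// Illustrative argument type mirroring the input schema; not the server's
// actual generated typings.
type GetModelInfoArgs = {
  model: string; // required: full ID or partial name
  include_api_example?: boolean; // defaults to true
  api_format?: "openai_sdk" | "curl" | "python_requests"; // defaults to openai_sdk
};

const args: GetModelInfoArgs = {
  model: "anthropic/claude-sonnet-4.6",
  api_format: "curl",
};

// Defaults applied exactly as the handler applies them.
const includeExample = args.include_api_example !== false; // → true
const format = args.api_format ?? "openai_sdk";            // → "curl"
```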

Implementation Reference

  • Registers the get_model_info tool with the MCP server. Includes tool name, description, input schema, and the async handler function that processes requests and returns Markdown-formatted model information.
    import { z } from "zod";
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    // ModelRegistry, formatModelDetail, and getApiExample come from the
    // repo's own modules (import paths omitted in this excerpt).

    export function registerModelInfoTool(
      server: McpServer,
      registry: ModelRegistry
    ): void {
      server.tool(
        "get_model_info",
        "Get detailed information about a specific LLM/VLM model: pricing, benchmarks, capabilities, " +
          "and ready-to-use API code example. Returns structured Markdown (~300 tokens).",
        {
          model: z
            .string()
            .describe(
              'Model ID or partial name (e.g., "anthropic/claude-sonnet-4.6", "gpt-5.1", "gemini")'
            ),
          include_api_example: z
            .boolean()
            .optional()
            .describe("Include API usage code example (default: true)"),
          api_format: z
            .enum(["openai_sdk", "curl", "python_requests"])
            .optional()
            .describe("API example format (default: openai_sdk)"),
        },
        async ({ model, include_api_example, api_format }) => {
          await registry.ensureLoaded();
    
          const found = registry.getModel(model);
    
          if (!found) {
            const similar = registry.findSimilar(model);
            return {
              content: [
                {
                  type: "text" as const,
                  text: `Model "${model}" not found.${similar.length > 0 ? ` Did you mean: ${similar.join(", ")}?` : ""}`,
                },
              ],
              isError: true,
            };
          }
    
          const fetchedAt = registry.getCacheFreshnessMs();
          let output = formatModelDetail(found, fetchedAt);
    
          // Add API example
          if (include_api_example !== false) {
            const format = api_format ?? "openai_sdk";
            const example = getApiExample(format, found.id);
            if (example) {
              output += `\n\n### API Example (${format})\n\`\`\`${example.language}\n${example.code}\n\`\`\``;
            }
          }
    
          return {
            content: [{ type: "text" as const, text: output }],
          };
        }
      );
    }
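The handler above delegates partial-name resolution to `registry.getModel`. The registry's real matching logic is not shown on this page; the sketch below illustrates one plausible exact-then-substring strategy (the function name and behavior are assumptions, not the actual `ModelRegistry` implementation):

```typescript
// Illustrative only: the real ModelRegistry may use fuzzy scoring instead.
// Tries an exact ID match first, then a case-insensitive substring match.
function resolveModelId(ids: string[], query: string): string | undefined {
  const exact = ids.find((id) => id === query);
  if (exact) return exact;
  const q = query.toLowerCase();
  return ids.find((id) => id.toLowerCase().includes(q));
}

const ids = ["anthropic/claude-sonnet-4.6", "openai/gpt-5.1", "google/gemini-2.5-pro"];
resolveModelId(ids, "gemini"); // → "google/gemini-2.5-pro"
resolveModelId(ids, "llama");  // → undefined; the handler then returns isError
```

When resolution fails, `registry.findSimilar` supplies the "Did you mean" suggestions seen in the error branch.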
  • Helper function that formats a single model's data into structured Markdown. Outputs provider info, modality, pricing table, benchmarks table, percentile ranks, and capabilities. Used by get_model_info handler to generate the response text.
    export function formatModelDetail(model: UnifiedModel, fetchedAt?: number): string {
      const lines: string[] = [];
    
      lines.push(`## ${model.id}`);
      lines.push("");
      const metaParts = [
        `**Provider**: ${model.metadata.provider}`,
        `**Modality**: ${fmtModalities(model.capabilities.inputModalities)}→${fmtModalities(model.capabilities.outputModalities)}`,
      ];
      if (model.metadata.releaseDate) {
        metaParts.push(`**Released**: ${model.metadata.releaseDate}`);
      }
      if (model.metadata.isOpenSource) {
        metaParts.push("**Open Source**");
      }
      lines.push(metaParts.join(" | "));
    
      // Pricing
      lines.push("");
      lines.push("### Pricing");
      lines.push("| Metric | Value |");
      lines.push("|--------|-------|");
      lines.push(`| Input | ${fmtPrice(model.pricing.input)} /1M tok |`);
      lines.push(`| Output | ${fmtPrice(model.pricing.output)} /1M tok |`);
      if (model.pricing.cacheRead !== undefined) {
        lines.push(`| Cache Read | ${fmtPrice(model.pricing.cacheRead)} /1M tok |`);
      }
      if (model.pricing.reasoning !== undefined) {
        lines.push(`| Reasoning | ${fmtPrice(model.pricing.reasoning)} /1M tok |`);
      }
      lines.push(`| Context | ${fmtContext(model.capabilities.contextLength)} |`);
      if (model.capabilities.maxOutputTokens) {
        lines.push(`| Max Output | ${fmtContext(model.capabilities.maxOutputTokens)} |`);
      }
    
      // Benchmarks (only non-null)
      const benchEntries = Object.entries(model.benchmarks).filter(
        ([, v]) => v !== undefined && v !== null
      );
      if (benchEntries.length > 0) {
        lines.push("");
        lines.push("### Benchmarks");
        lines.push("| Benchmark | Score |");
        lines.push("|-----------|-------|");
        for (const [key, value] of benchEntries) {
          const label = benchmarkLabel(key);
          const formatted = key === "arenaElo" ? fmtElo(value as number) : fmtScore(value as number);
          lines.push(`| ${label} | ${formatted} |`);
        }
      }
    
      // Percentile ranks (only non-null)
      const percEntries = Object.entries(model.percentiles).filter(
        ([, v]) => v !== undefined && v !== null
      );
      if (percEntries.length > 0) {
        lines.push("");
        lines.push("### Percentile Ranks");
        lines.push("| Category | Percentile |");
        lines.push("|----------|------------|");
        for (const [key, value] of percEntries) {
          lines.push(`| ${percentileLabel(key)} | P${value} |`);
        }
      }
    
      // Capabilities
      const caps: string[] = [];
      if (model.capabilities.supportsTools) caps.push("Tools");
      if (model.capabilities.supportsReasoning) caps.push("Reasoning");
      if (model.capabilities.inputModalities.includes("image")) caps.push("Vision");
      if (caps.length > 0) {
        lines.push("");
        lines.push(`**Capabilities**: ${caps.join(", ")}`);
      }
    
      lines.push(freshnessFooter(fetchedAt));
    
      return lines.join("\n");
    }
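formatModelDetail leans on small formatting helpers (fmtPrice, fmtContext, and others) defined elsewhere in the repo. The bodies below are plausible sketches of two of them, written to match the "/1M tok" and context-length cells in the tables above; they are assumptions, not the actual implementations:

```typescript
// Sketches only: the repo's real fmtPrice/fmtContext may differ in rounding
// and edge cases.
function fmtPrice(perMillion: number | undefined): string {
  if (perMillion === undefined) return "n/a";
  return `$${perMillion.toFixed(2)}`; // price per 1M tokens
}

function fmtContext(tokens: number): string {
  // Compact shorthand: 200000 -> "200K", 1500000 -> "1.5M".
  if (tokens >= 1_000_000) return `${(tokens / 1_000_000).toFixed(1)}M`;
  if (tokens >= 1_000) return `${Math.round(tokens / 1_000)}K`;
  return String(tokens);
}

fmtPrice(3);         // → "$3.00"
fmtContext(200_000); // → "200K"
```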
  • Helper function that generates ready-to-use API code examples in different formats (OpenAI SDK, curl, Python requests). Returns templated code with the model ID substituted, used by get_model_info to include practical usage examples.
    export function getApiExample(
      format: string,
      modelId: string
    ): { code: string; language: string } | null {
      const example = EXAMPLES[format];
      if (!example) return null;
      return {
        code: example.template.replace(/\{\{MODEL_ID\}\}/g, modelId),
        language: example.language,
      };
    }
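The EXAMPLES record that getApiExample reads from is defined elsewhere in the repo. The snippet below sketches its shape with a single hypothetical curl entry (the endpoint URL is a placeholder, not the server's real template) and repeats getApiExample so the sketch is self-contained:

```typescript
// Shape sketch: one hypothetical entry keyed by format name. The real
// EXAMPLES record also carries openai_sdk and python_requests templates.
const EXAMPLES: Record<string, { template: string; language: string }> = {
  curl: {
    language: "bash",
    template: `curl <API_BASE_URL>/chat/completions \\
  -H "Content-Type: application/json" \\
  -d '{"model": "{{MODEL_ID}}", "messages": [{"role": "user", "content": "Hi"}]}'`,
  },
};

// Same logic as getApiExample above.
function getApiExample(
  format: string,
  modelId: string
): { code: string; language: string } | null {
  const example = EXAMPLES[format];
  if (!example) return null;
  return {
    code: example.template.replace(/\{\{MODEL_ID\}\}/g, modelId),
    language: example.language,
  };
}

const ex = getApiExample("curl", "openai/gpt-5.1"); // code now embeds the model ID
getApiExample("python", "openai/gpt-5.1");          // → null (unknown format key)
```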
