
OpenRouter MCP Server

by th3nolo

compare_models

Compare AI model responses side-by-side by sending the same prompt to multiple models for evaluation and analysis.

Instructions

Compare responses from multiple models

Input Schema

Name        Required  Description                     Default
models      Yes       Array of model IDs to compare   -
message     Yes       Message to send to all models   -
max_tokens  No        Maximum tokens per response     500
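
For example, a call to this tool might pass arguments like the following (the model IDs are illustrative; any valid OpenRouter model IDs work, and omitting max_tokens falls back to the schema default of 500):

```json
{
  "models": ["openai/gpt-4o-mini", "anthropic/claude-3.5-haiku"],
  "message": "Explain recursion in one sentence.",
  "max_tokens": 200
}
```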

Implementation Reference

  • The main implementation of the compareModels tool, which sends the same message to multiple models in parallel and formats the comparison results:
    private async compareModels(params: z.infer<typeof CompareModelsSchema>) {
      const { models, message, max_tokens } = params;
    
      const promises = models.map(async (model) => {
        try {
          const response = await axios.post(
            `${OPENROUTER_CONFIG.baseURL}/chat/completions`,
            {
              model,
              messages: [{ role: "user", content: message }],
              max_tokens,
            },
            { headers: OPENROUTER_CONFIG.headers }
          );
    
          return {
            model,
            response: response.data.choices[0].message.content,
            usage: response.data.usage,
            success: true,
          };
        } catch (error) {
          return {
            model,
            error: error instanceof Error ? error.message : "Unknown error",
            success: false,
          };
        }
      });
    
      const results = await Promise.all(promises);
    
      const formattedResults = results
        .map((result) => {
          if (result.success) {
            return `**${result.model}:**\n${result.response}\n*Tokens: ${result.usage.total_tokens}*`;
          } else {
            return `**${result.model}:** ❌ Error - ${result.error}`;
          }
        })
        .join("\n\n---\n\n");
    
      return {
        content: [
          {
            type: "text" as const,
            text: `Comparison of ${models.length} models:\n\n${formattedResults}`,
          },
        ],
      };
    }
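  Note that each per-model promise catches its own error and resolves to a result object, so Promise.all never rejects and one failing model cannot sink the whole comparison. A minimal standalone sketch of this pattern (askModel is a hypothetical stand-in for the axios call to OpenRouter):

    // Discriminated union so TypeScript can narrow on `success`.
    type ModelResult =
      | { model: string; response: string; success: true }
      | { model: string; error: string; success: false };

    // Stand-in for the real OpenRouter request; one model is rigged to fail.
    async function askModel(model: string, message: string): Promise<string> {
      if (model === "bad/model") throw new Error("404 model not found");
      return `echo from ${model}: ${message}`;
    }

    async function compareAll(models: string[], message: string): Promise<ModelResult[]> {
      const promises = models.map(async (model): Promise<ModelResult> => {
        try {
          const response = await askModel(model, message);
          return { model, response, success: true };
        } catch (error) {
          // Errors are captured per model, not propagated to Promise.all.
          return {
            model,
            error: error instanceof Error ? error.message : "Unknown error",
            success: false,
          };
        }
      });
      // Safe: every promise resolves, even for failed models.
      return Promise.all(promises);
    }

    compareAll(["good/model", "bad/model"], "hi").then((results) => {
      for (const r of results) {
        console.log(r.success ? `${r.model}: ${r.response}` : `${r.model}: error - ${r.error}`);
      }
    });

  An alternative would be Promise.allSettled, but because the catch block already converts failures into values, plain Promise.all suffices here.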
  • Zod schema defining the input validation for the compare_models tool (required models array and message, with max_tokens optional and defaulting to 500):
    const CompareModelsSchema = z.object({
      models: z.array(z.string()).describe("Array of model IDs to compare"),
      message: z.string().describe("Message to send to all models"),
      max_tokens: z.number().optional().default(500).describe("Maximum tokens per response"),
    });
  • src/server.ts:177-202 (registration)
    Tool registration in the MCP tools list with name 'compare_models', description, and input schema
    {
      name: "compare_models",
      description: "Compare responses from multiple models",
      inputSchema: {
        type: "object",
        properties: {
          models: {
            type: "array",
            items: {
              type: "string",
            },
            description: "Array of model IDs to compare",
          },
          message: {
            type: "string",
            description: "Message to send to all models",
          },
          max_tokens: {
            type: "number",
            description: "Maximum tokens per response",
            default: 500,
          },
        },
        required: ["models", "message"],
      },
    },
  • src/server.ts:231-232 (registration)
    Switch case handler that routes compare_models tool calls to the compareModels method
    case "compare_models":
      return await this.compareModels(CompareModelsSchema.parse(args));
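
  On the wire, such a call reaches this switch case as a standard MCP tools/call JSON-RPC request, roughly like the following (model IDs illustrative):

    {
      "jsonrpc": "2.0",
      "id": 1,
      "method": "tools/call",
      "params": {
        "name": "compare_models",
        "arguments": {
          "models": ["openai/gpt-4o-mini", "anthropic/claude-3.5-haiku"],
          "message": "Explain recursion in one sentence."
        }
      }
    }

  CompareModelsSchema.parse(args) then validates params.arguments and fills in the max_tokens default before compareModels runs.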
