OpenRouter MCP Server

by th3nolo

chat_with_model

Send messages to AI models via OpenRouter to generate responses, compare outputs, and retrieve model information with pricing details.

Instructions

Send a message to a specific OpenRouter model

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| model | Yes | OpenRouter model ID (e.g., 'openai/gpt-4') | |
| message | Yes | Message to send to the model | |
| max_tokens | No | Maximum tokens in response | 1000 |
| temperature | No | Temperature for response randomness | 0.7 |
| system_prompt | No | System prompt for the conversation | |
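For example, an MCP client could invoke the tool with arguments shaped like the following (the model ID and prompt here are illustrative values, not part of the server):

```json
{
  "name": "chat_with_model",
  "arguments": {
    "model": "openai/gpt-4",
    "message": "Summarize the MCP protocol in one sentence.",
    "temperature": 0.2
  }
}
```

Omitted optional fields (`max_tokens` here) are filled in server-side from the schema defaults.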

Implementation Reference

  • The main handler function that executes the chat_with_model tool. It validates parameters using ChatRequestSchema, constructs the messages array (including optional system prompt), makes an API call to OpenRouter's chat/completions endpoint, and returns the response with usage statistics.
```typescript
private async chatWithModel(params: z.infer<typeof ChatRequestSchema>) {
  const { model, message, max_tokens, temperature, system_prompt } = params;

  const messages = [];
  if (system_prompt) {
    messages.push({ role: "system", content: system_prompt });
  }
  messages.push({ role: "user", content: message });

  const response = await axios.post(
    `${OPENROUTER_CONFIG.baseURL}/chat/completions`,
    {
      model,
      messages,
      max_tokens,
      temperature,
    },
    { headers: OPENROUTER_CONFIG.headers }
  );

  const result = response.data.choices[0].message.content;
  const usage = response.data.usage;

  return {
    content: [
      {
        type: "text" as const,
        text: `**Model:** ${model}\n**Response:** ${result}\n\n**Usage:**\n- Prompt tokens: ${usage.prompt_tokens}\n- Completion tokens: ${usage.completion_tokens}\n- Total tokens: ${usage.total_tokens}`,
      },
    ],
  };
}
```
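The messages-array construction in the handler can be isolated as a small pure function, which makes the system-prompt branching easy to verify. This is a sketch for illustration; `buildMessages` is not part of the server's code:

```typescript
// Illustrative helper (not in the server): mirrors how chatWithModel
// assembles the messages array, prepending an optional system prompt.
type ChatMessage = { role: "system" | "user"; content: string };

function buildMessages(message: string, systemPrompt?: string): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (systemPrompt) {
    // A system prompt, when provided, always comes first.
    messages.push({ role: "system", content: systemPrompt });
  }
  messages.push({ role: "user", content: message });
  return messages;
}
```

With a system prompt the array has two entries (system first, then user); without one, only the user message is sent.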
  • Zod schema defining the input validation for the chat_with_model tool. It requires model and message, and marks max_tokens, temperature, and system_prompt optional; max_tokens and temperature carry defaults (1000 and 0.7), while system_prompt has none.
```typescript
const ChatRequestSchema = z.object({
  model: z.string().describe("OpenRouter model ID (e.g., 'openai/gpt-4')"),
  message: z.string().describe("Message to send to the model"),
  max_tokens: z.number().optional().default(1000).describe("Maximum tokens in response"),
  temperature: z.number().optional().default(0.7).describe("Temperature for response randomness"),
  system_prompt: z.string().optional().describe("System prompt for the conversation"),
});
```
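Because the schema chains `.optional().default(...)`, `ChatRequestSchema.parse()` returns objects with max_tokens and temperature always populated. A minimal plain-TypeScript sketch of that defaulting behavior (the `applyDefaults` helper is illustrative, not server code):

```typescript
// Illustrative sketch (not in the server): mimics how parse() fills
// max_tokens and temperature when the caller omits them.
interface ChatArgs {
  model: string;
  message: string;
  max_tokens?: number;
  temperature?: number;
  system_prompt?: string;
}

function applyDefaults(args: ChatArgs) {
  return {
    model: args.model,
    message: args.message,
    max_tokens: args.max_tokens ?? 1000,   // schema default
    temperature: args.temperature ?? 0.7,  // schema default
    system_prompt: args.system_prompt,     // no default; stays undefined
  };
}
```

Note that `??` (like Zod's `.default()`) only substitutes for missing or undefined values, so an explicit `temperature: 0` is preserved.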
  • src/server.ts:145-176 (registration)
    Tool registration in the ListToolsRequestSchema handler. Defines the tool's metadata, input schema structure, required parameters, and default values for MCP clients to discover.
```typescript
{
  name: "chat_with_model",
  description: "Send a message to a specific OpenRouter model",
  inputSchema: {
    type: "object",
    properties: {
      model: {
        type: "string",
        description: "OpenRouter model ID (e.g., 'openai/gpt-4')",
      },
      message: {
        type: "string",
        description: "Message to send to the model",
      },
      max_tokens: {
        type: "number",
        description: "Maximum tokens in response",
        default: 1000,
      },
      temperature: {
        type: "number",
        description: "Temperature for response randomness",
        default: 0.7,
      },
      system_prompt: {
        type: "string",
        description: "System prompt for the conversation",
      },
    },
    required: ["model", "message"],
  },
},
```
  • src/server.ts:229-230 (registration)
    Tool dispatch logic in the CallToolRequestSchema handler. Routes the chat_with_model tool invocation to the chatWithModel handler method with parameter validation.
```typescript
case "chat_with_model":
  return await this.chatWithModel(ChatRequestSchema.parse(args));
```
  • Configuration object used by the chatWithModel handler for API requests to OpenRouter. Contains base URL, API key, and headers required for authentication.
```typescript
const OPENROUTER_CONFIG = {
  baseURL: process.env.OPENROUTER_BASE_URL || "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
  headers: {
    "Authorization": `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "HTTP-Referer": process.env.OPENROUTER_SITE_URL || "http://localhost:3000",
    "X-Title": process.env.OPENROUTER_APP_NAME || "OpenRouter MCP Server",
    "Content-Type": "application/json",
  },
};
```
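The config's environment fallbacks are easier to see when the same object is built by a function taking the environment as a parameter. A sketch under that assumption (`buildConfig` is hypothetical; the server builds `OPENROUTER_CONFIG` once at module load from `process.env`):

```typescript
// Illustrative sketch (not in the server): same shape as OPENROUTER_CONFIG,
// but parameterized over the environment so the fallbacks are testable.
function buildConfig(env: Record<string, string | undefined>) {
  return {
    baseURL: env.OPENROUTER_BASE_URL || "https://openrouter.ai/api/v1",
    apiKey: env.OPENROUTER_API_KEY,
    headers: {
      Authorization: `Bearer ${env.OPENROUTER_API_KEY}`,
      "HTTP-Referer": env.OPENROUTER_SITE_URL || "http://localhost:3000",
      "X-Title": env.OPENROUTER_APP_NAME || "OpenRouter MCP Server",
      "Content-Type": "application/json",
    },
  };
}
```

Only OPENROUTER_API_KEY has no fallback: if it is unset, the Authorization header is built with the literal string "undefined" and OpenRouter will reject the request, so the key must be provided via the environment.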
