# chat_with_model
Send messages to AI models via OpenRouter to generate responses, compare outputs, and retrieve model information with pricing details.
## Instructions
Send a message to a specific OpenRouter model
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | Yes | OpenRouter model ID (e.g., 'openai/gpt-4') | |
| message | Yes | Message to send to the model | |
| max_tokens | No | Maximum tokens in response | 1000 |
| temperature | No | Temperature for response randomness | 0.7 |
| system_prompt | No | System prompt for the conversation | |
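For reference, an MCP client invoking this tool supplies an arguments object matching the schema above. A minimal sketch (the model ID and message values are illustrative):

```typescript
// Example arguments for a chat_with_model call (values are illustrative).
const args = {
  model: "openai/gpt-4",                          // required
  message: "Summarize RFC 2119 in one sentence.", // required
  max_tokens: 200,          // optional, defaults to 1000
  temperature: 0.2,         // optional, defaults to 0.7
  system_prompt: "You are a terse assistant.",    // optional
};

// The two required keys must always be present.
console.log("model" in args && "message" in args); // true
```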
## Implementation Reference
- **src/server.ts:330-361 (handler)** — The main handler function for the `chat_with_model` tool. It validates parameters with `ChatRequestSchema`, builds the messages array (prepending the optional system prompt), calls OpenRouter's `chat/completions` endpoint, and returns the response text with usage statistics.
```typescript
private async chatWithModel(params: z.infer<typeof ChatRequestSchema>) {
  const { model, message, max_tokens, temperature, system_prompt } = params;

  const messages = [];
  if (system_prompt) {
    messages.push({ role: "system", content: system_prompt });
  }
  messages.push({ role: "user", content: message });

  const response = await axios.post(
    `${OPENROUTER_CONFIG.baseURL}/chat/completions`,
    { model, messages, max_tokens, temperature },
    { headers: OPENROUTER_CONFIG.headers }
  );

  const result = response.data.choices[0].message.content;
  const usage = response.data.usage;

  return {
    content: [
      {
        type: "text" as const,
        text: `**Model:** ${model}\n**Response:** ${result}\n\n**Usage:**\n- Prompt tokens: ${usage.prompt_tokens}\n- Completion tokens: ${usage.completion_tokens}\n- Total tokens: ${usage.total_tokens}`,
      },
    ],
  };
}
```
- **src/server.ts:19-25 (schema)** — Zod schema defining input validation for the `chat_with_model` tool: required fields (`model`, `message`) and optional fields (`max_tokens`, `temperature`, `system_prompt`) with defaults.
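The messages-array construction above can be isolated for illustration. A minimal sketch (the `buildMessages` helper is hypothetical, not part of the server):

```typescript
type ChatMessage = { role: "system" | "user"; content: string };

// Mirrors the handler's logic: a system message is prepended only when
// a system_prompt is provided, and the user message always comes last.
function buildMessages(message: string, system_prompt?: string): ChatMessage[] {
  const messages: ChatMessage[] = [];
  if (system_prompt) {
    messages.push({ role: "system", content: system_prompt });
  }
  messages.push({ role: "user", content: message });
  return messages;
}

console.log(buildMessages("Hello").length);              // 1
console.log(buildMessages("Hello", "Be brief")[0].role); // "system"
```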
```typescript
const ChatRequestSchema = z.object({
  model: z.string().describe("OpenRouter model ID (e.g., 'openai/gpt-4')"),
  message: z.string().describe("Message to send to the model"),
  max_tokens: z.number().optional().default(1000).describe("Maximum tokens in response"),
  temperature: z.number().optional().default(0.7).describe("Temperature for response randomness"),
  system_prompt: z.string().optional().describe("System prompt for the conversation"),
});
```
- **src/server.ts:145-176 (registration)** — Tool registration in the `ListToolsRequestSchema` handler. Defines the tool's metadata, input schema, required parameters, and defaults so MCP clients can discover it.
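When the optional fields are omitted, `ChatRequestSchema.parse` fills in the defaults shown above. The effect can be sketched without Zod (the `applyChatDefaults` helper is hypothetical):

```typescript
interface ChatRequest {
  model: string;
  message: string;
  max_tokens?: number;
  temperature?: number;
  system_prompt?: string;
}

// Mirrors the Zod defaults: max_tokens -> 1000, temperature -> 0.7.
// Uses ?? rather than || so an explicit 0 temperature is preserved.
function applyChatDefaults(req: ChatRequest): ChatRequest {
  return {
    ...req,
    max_tokens: req.max_tokens ?? 1000,
    temperature: req.temperature ?? 0.7,
  };
}

const parsed = applyChatDefaults({ model: "openai/gpt-4", message: "Hi" });
console.log(parsed.max_tokens, parsed.temperature); // 1000 0.7
```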
```typescript
{
  name: "chat_with_model",
  description: "Send a message to a specific OpenRouter model",
  inputSchema: {
    type: "object",
    properties: {
      model: {
        type: "string",
        description: "OpenRouter model ID (e.g., 'openai/gpt-4')",
      },
      message: {
        type: "string",
        description: "Message to send to the model",
      },
      max_tokens: {
        type: "number",
        description: "Maximum tokens in response",
        default: 1000,
      },
      temperature: {
        type: "number",
        description: "Temperature for response randomness",
        default: 0.7,
      },
      system_prompt: {
        type: "string",
        description: "System prompt for the conversation",
      },
    },
    required: ["model", "message"],
  },
},
```
- **src/server.ts:229-230 (dispatch)** — Tool dispatch logic in the `CallToolRequestSchema` handler. Routes the `chat_with_model` invocation to the `chatWithModel` method after validating the arguments.
```typescript
case "chat_with_model":
  return await this.chatWithModel(ChatRequestSchema.parse(args));
```
- **src/server.ts:34-43 (helper)** — Configuration object used by the `chatWithModel` handler for OpenRouter API requests: base URL, API key, and the headers required for authentication.
```typescript
const OPENROUTER_CONFIG = {
  baseURL: process.env.OPENROUTER_BASE_URL || "https://openrouter.ai/api/v1",
  apiKey: process.env.OPENROUTER_API_KEY,
  headers: {
    "Authorization": `Bearer ${process.env.OPENROUTER_API_KEY}`,
    "HTTP-Referer": process.env.OPENROUTER_SITE_URL || "http://localhost:3000",
    "X-Title": process.env.OPENROUTER_APP_NAME || "OpenRouter MCP Server",
    "Content-Type": "application/json",
  },
};
```
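`OPENROUTER_CONFIG` is driven entirely by environment variables; a typical setup might look like this (all values are placeholders):

```shell
# Required: your OpenRouter API key
export OPENROUTER_API_KEY="sk-or-..."

# Optional overrides (fallback values are hard-coded in OPENROUTER_CONFIG)
export OPENROUTER_BASE_URL="https://openrouter.ai/api/v1"
export OPENROUTER_SITE_URL="http://localhost:3000"
export OPENROUTER_APP_NAME="OpenRouter MCP Server"
```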