# chat
Send messages to 627+ AI models, including GPT-5, Claude, and Gemini, through a single interface.
## Instructions
Send a message to any AI model via Crazyrouter. Supports 627+ models including GPT-5, Claude Opus 4.6, Gemini 3, DeepSeek R1, Llama 4, Qwen3, Grok 4, and more.
## Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| model | No | The AI model to use (default: gpt-5-mini). Examples: gpt-5, claude-opus-4-6, gemini-3-pro, deepseek-r1, llama-4-scout, qwen3-235b, grok-4 | gpt-5-mini |
| messages | Yes | Array of chat messages with role and content | |
| temperature | No | Sampling temperature (0-2). Lower = more deterministic, higher = more creative | |
| max_tokens | No | Maximum number of tokens to generate | |
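As the table above indicates, `model` falls back to `gpt-5-mini` when omitted, while `temperature` and `max_tokens` are left out of the request entirely rather than sent with defaults. A minimal sketch of that mapping; `buildChatBody` and `ChatMessage` are hypothetical names introduced here for illustration, not part of the source:

```typescript
type ChatMessage = { role: "system" | "user" | "assistant"; content: string };

// Hypothetical helper showing how the schema fields map onto the request body.
function buildChatBody(args: {
  model?: string;
  messages: ChatMessage[];
  temperature?: number;
  max_tokens?: number;
}): Record<string, unknown> {
  const body: Record<string, unknown> = {
    model: args.model ?? "gpt-5-mini", // schema default
    messages: args.messages,
  };
  // Optional fields are only included when explicitly provided.
  if (args.temperature !== undefined) body.temperature = args.temperature;
  if (args.max_tokens !== undefined) body.max_tokens = args.max_tokens;
  return body;
}
```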
## Implementation Reference
- src/index.ts:123-186 (handler): the "chat" tool registration and handler implementation, which calls the Crazyrouter API to perform the model completion.
server.tool( "chat", "Send a message to any AI model via Crazyrouter. Supports 627+ models including GPT-5, Claude Opus 4.6, Gemini 3, DeepSeek R1, Llama 4, Qwen3, Grok 4, and more.", { model: z .string() .default(DEFAULT_CHAT_MODEL) .describe( `The AI model to use (default: ${DEFAULT_CHAT_MODEL}). Examples: gpt-5, claude-opus-4-6, gemini-3-pro, deepseek-r1, llama-4-scout, qwen3-235b, grok-4` ), messages: z .array( z.object({ role: z .enum(["system", "user", "assistant"]) .describe("The role of the message sender"), content: z.string().describe("The message content"), }) ) .describe("Array of chat messages with role and content"), temperature: z .number() .min(0) .max(2) .optional() .describe( "Sampling temperature (0-2). Lower = more deterministic, higher = more creative" ), max_tokens: z .number() .optional() .describe("Maximum number of tokens to generate"), }, async ({ model, messages, temperature, max_tokens }) => { try { const body: Record<string, unknown> = { model, messages }; if (temperature !== undefined) body.temperature = temperature; if (max_tokens !== undefined) body.max_tokens = max_tokens; const result = (await apiRequest("/chat/completions", { method: "POST", body, })) as { choices?: Array<{ message?: { content?: string; role?: string }; finish_reason?: string }>; usage?: { prompt_tokens?: number; completion_tokens?: number; total_tokens?: number }; model?: string; }; const content = result.choices?.[0]?.message?.content ?? "No response content"; const usage = result.usage; const actualModel = result.model ?? model; let text = content; if (usage) { text += `\n\n---\n📊 Model: ${actualModel} | Tokens: ${usage.prompt_tokens ?? "?"}→${usage.completion_tokens ?? "?"} (${usage.total_tokens ?? "?"} total)`; } return { content: [{ type: "text" as const, text }] }; } catch (error) { const message = error instanceof Error ? 
error.message : "Unknown error occurred"; return { content: [{ type: "text" as const, text: `Error: ${message}` }], isError: true }; } } );