crazyrouter-mcp
by xujfcn

chat

Send messages to any of 627+ AI models, including GPT-5, Claude, and Gemini, through a single interface.

Instructions

Send a message to any AI model via Crazyrouter. Supports 627+ models including GPT-5, Claude Opus 4.6, Gemini 3, DeepSeek R1, Llama 4, Qwen3, Grok 4, and more.
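
The messages parameter uses the standard role/content chat format. A minimal sketch in TypeScript (the prompt text is illustrative, not from this page):

    // Roles are limited to "system", "user", and "assistant" (see the schema below).
    const messages = [
      { role: "system", content: "You are a concise technical assistant." },
      { role: "user", content: "Explain what an MCP server is in one sentence." },
    ];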

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| model | No | The AI model to use. Examples: gpt-5, claude-opus-4-6, gemini-3-pro, deepseek-r1, llama-4-scout, qwen3-235b, grok-4 | gpt-5-mini |
| messages | Yes | Array of chat messages with role and content | — |
| temperature | No | Sampling temperature (0-2). Lower = more deterministic, higher = more creative | — |
| max_tokens | No | Maximum number of tokens to generate | — |
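
To tie the schema together, here is a hedged sketch of invoking the tool from an MCP client with the official TypeScript SDK. The launch command, client name, and prompt are assumptions for illustration, not taken from this page:

    import { Client } from "@modelcontextprotocol/sdk/client/index.js";
    import { StdioClientTransport } from "@modelcontextprotocol/sdk/client/stdio.js";

    // Hypothetical launch command; substitute the server's actual entrypoint.
    const transport = new StdioClientTransport({
      command: "node",
      args: ["build/index.js"],
    });

    const client = new Client({ name: "example-client", version: "1.0.0" });
    await client.connect(transport);

    // Arguments mirror the input schema above; temperature and max_tokens are optional.
    const result = await client.callTool({
      name: "chat",
      arguments: {
        model: "gpt-5-mini",
        messages: [{ role: "user", content: "Hello!" }],
        temperature: 0.2,
        max_tokens: 256,
      },
    });
    console.log(result.content);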

Implementation Reference

  • The "chat" tool registration and handler implementation using the Crazyrouter API to perform model completion.
    server.tool(
      "chat",
      "Send a message to any AI model via Crazyrouter. Supports 627+ models including GPT-5, Claude Opus 4.6, Gemini 3, DeepSeek R1, Llama 4, Qwen3, Grok 4, and more.",
      {
        model: z
          .string()
          .default(DEFAULT_CHAT_MODEL)
          .describe(
            `The AI model to use (default: ${DEFAULT_CHAT_MODEL}). Examples: gpt-5, claude-opus-4-6, gemini-3-pro, deepseek-r1, llama-4-scout, qwen3-235b, grok-4`
          ),
        messages: z
          .array(
            z.object({
              role: z
                .enum(["system", "user", "assistant"])
                .describe("The role of the message sender"),
              content: z.string().describe("The message content"),
            })
          )
          .describe("Array of chat messages with role and content"),
        temperature: z
          .number()
          .min(0)
          .max(2)
          .optional()
          .describe(
            "Sampling temperature (0-2). Lower = more deterministic, higher = more creative"
          ),
        max_tokens: z
          .number()
          .optional()
          .describe("Maximum number of tokens to generate"),
      },
      async ({ model, messages, temperature, max_tokens }) => {
        try {
          // Only forward optional parameters that the caller actually supplied.
          const body: Record<string, unknown> = { model, messages };
          if (temperature !== undefined) body.temperature = temperature;
          if (max_tokens !== undefined) body.max_tokens = max_tokens;
    
          // POST to Crazyrouter's OpenAI-style chat completions endpoint.
          const result = (await apiRequest("/chat/completions", {
            method: "POST",
            body,
          })) as {
            choices?: Array<{ message?: { content?: string; role?: string }; finish_reason?: string }>;
            usage?: { prompt_tokens?: number; completion_tokens?: number; total_tokens?: number };
            model?: string;
          };
    
          // Extract the assistant reply, falling back if the response is empty.
          const content = result.choices?.[0]?.message?.content ?? "No response content";
          const usage = result.usage;
          const actualModel = result.model ?? model;
    
          let text = content;
          if (usage) {
            // Append a usage footer when the API reports token counts.
            text += `\n\n---\nšŸ“Š Model: ${actualModel} | Tokens: ${usage.prompt_tokens ?? "?"}→${usage.completion_tokens ?? "?"} (${usage.total_tokens ?? "?"} total)`;
          }
    
          return { content: [{ type: "text" as const, text }] };
        } catch (error) {
          const message = error instanceof Error ? error.message : "Unknown error occurred";
          return { content: [{ type: "text" as const, text: `Error: ${message}` }], isError: true };
        }
      }
    );
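
The handler depends on an apiRequest helper that is not shown on this page. A minimal sketch of what such a helper might look like, assuming an OpenAI-compatible JSON API plus CRAZYROUTER_BASE_URL and CRAZYROUTER_API_KEY environment variables (all assumptions, not confirmed by the source):

    // Hypothetical helper: the real base URL, auth scheme, and error format
    // used by crazyrouter-mcp are not shown on this page.
    const BASE_URL = process.env.CRAZYROUTER_BASE_URL ?? "https://example.invalid/v1";

    async function apiRequest(
      path: string,
      options: { method: string; body?: Record<string, unknown> }
    ): Promise<unknown> {
      const response = await fetch(`${BASE_URL}${path}`, {
        method: options.method,
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.CRAZYROUTER_API_KEY}`,
        },
        body: options.body ? JSON.stringify(options.body) : undefined,
      });
      if (!response.ok) {
        // Surface HTTP failures so the tool handler's catch block can report them.
        throw new Error(`Crazyrouter API error ${response.status}: ${await response.text()}`);
      }
      return response.json();
    }

Throwing on non-OK responses means HTTP errors propagate to the handler's catch block above and are reported to the client as isError results.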

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/xujfcn/crazyrouter-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.