
Clawy MCP Server

by ClawyPro

llm_chat

Get AI chat completions with smart model routing that automatically selects Claude, GPT, Gemini, or Llama based on task complexity. Pay per call with USDC credits without managing API keys.

Instructions

Smart-routed LLM chat completion. Automatically selects the optimal model (Claude, GPT, Gemini, Llama) based on task complexity. No API keys needed — pay per call with USDC credits.

Input Schema

| Name        | Required | Description                                                                          | Default |
|-------------|----------|--------------------------------------------------------------------------------------|---------|
| model       | No       | Model selection: 'auto' for smart routing (recommended), or specify a model directly |         |
| messages    | Yes      | Chat messages array                                                                  |         |
| temperature | No       | Sampling temperature (0-2)                                                           | 0.7     |
| max_tokens  | No       | Maximum response tokens                                                              |         |
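A call conforming to this schema might look like the following sketch; the message contents and parameter values are illustrative, and "auto" defers model selection to the router:

```typescript
// Illustrative llm_chat arguments matching the input schema above.
const params = {
  model: "auto", // let the router pick Claude, GPT, Gemini, or Llama
  messages: [
    { role: "system", content: "You are a concise assistant." },
    { role: "user", content: "Summarize the MCP protocol in one sentence." },
  ],
  temperature: 0.7, // default sampling temperature
  max_tokens: 256,  // cap on response length
};

console.log(JSON.stringify(params, null, 2));
```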

Implementation Reference

  • The generic tool handler in src/index.ts executes the llm_chat tool by making an HTTP request to the endpoint defined in the tool's configuration (src/llm/chat.ts):
      async (params) => {
        const method = tool.method || "POST";
        const result = await gatewayRequest(method, tool.endpoint, params as Record<string, unknown>);
    
        if (result.error) {
          return {
            content: [{ type: "text" as const, text: `Error (${result.status}): ${result.error}` }],
            isError: true,
          };
        }
    
        const text = typeof result.data === "string"
          ? result.data
          : JSON.stringify(result.data, null, 2);
    
        return {
          content: [{ type: "text" as const, text }],
        };
      },
    );
  • src/llm/chat.ts:6-19 (registration)
    Registration and configuration of the llm_chat tool.
      name: "llm_chat",
      description: "Smart-routed LLM chat completion. Automatically selects the optimal model (Claude, GPT, Gemini, Llama) based on task complexity. No API keys needed — pay per call with USDC credits.",
      inputSchema: z.object({
        model: z.enum(["auto", "gpt-5-nano", "kimi-k2p5", "claude-opus-4-6"]).optional()
          .describe("Model selection: 'auto' for smart routing (recommended), or specify a model directly"),
        messages: z.array(z.object({
          role: z.enum(["system", "user", "assistant"]),
          content: z.string(),
        })).describe("Chat messages array"),
        temperature: z.number().optional().describe("Sampling temperature (0-2, default 0.7)"),
        max_tokens: z.number().optional().describe("Maximum response tokens"),
      }),
      endpoint: "/v1/llm/chat",
    },

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/ClawyPro/clawy-mcp-server'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.