chat_completion

Generate AI responses using Grok models by providing conversation messages and parameters for customized chat interactions.

Instructions

Generate a response using Grok AI chat completion

Input Schema

Name | Required | Description | Default
max_tokens | No | Maximum number of tokens to generate | 16384
messages | Yes | Array of message objects with role and content |
model | No | Grok model to use (e.g., grok-2-latest, grok-3, grok-3-reasoner, grok-3-deepsearch, grok-3-mini-beta) | grok-3-mini-beta
temperature | No | Sampling temperature (0-2) | 1
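
For orientation, a hypothetical call to this tool could pass arguments like the following; the prompt text and values are illustrative only, and only messages is required:

    // Illustrative chat_completion arguments (only `messages` is required).
    const args = {
      messages: [
        { role: 'system', content: 'You are a concise assistant.' },
        { role: 'user', content: 'Explain what an MCP server does in one sentence.' },
      ],
      model: 'grok-3-mini-beta', // optional; schema default
      temperature: 0.7,          // optional; range 0-2, default 1
      max_tokens: 512,           // optional; default 16384
    };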

Implementation Reference

  • The primary handler function for the 'chat_completion' tool. It validates input arguments, prepares options, calls the Grok API client to generate a chat completion, and returns the formatted response. A sketch of the response fields it reads appears after this reference list.
    private async handleChatCompletion(args: any) {
      console.error('[Tool] Handling chat_completion tool call');
      const { messages, model, temperature, max_tokens, ...otherOptions } = args;

      // Validate messages
      if (!Array.isArray(messages) || messages.length === 0) {
        throw new Error('Messages must be a non-empty array');
      }

      // Create options object
      const options = {
        model: model || 'grok-2-latest',
        temperature: temperature !== undefined ? temperature : 1,
        max_tokens: max_tokens !== undefined ? max_tokens : 16384,
        ...otherOptions
      };

      // Call Grok API
      const response = await this.grokClient.createChatCompletion(messages, options);

      return {
        content: [
          {
            type: 'text',
            text: response.choices[0].message.content,
          },
        ],
      };
    }
  • src/index.ts:61-106 (registration)
    Tool registration in the ListToolsRequestHandler, defining the name, description, and input schema for 'chat_completion'. A sketch of how such a definition is wired into the ListTools handler follows the list.
    {
      name: 'chat_completion',
      description: 'Generate a response using Grok AI chat completion',
      inputSchema: {
        type: 'object',
        properties: {
          messages: {
            type: 'array',
            description: 'Array of message objects with role and content',
            items: {
              type: 'object',
              properties: {
                role: {
                  type: 'string',
                  description: 'Role of the message sender (system, user, assistant)',
                  enum: ['system', 'user', 'assistant']
                },
                content: {
                  type: 'string',
                  description: 'Content of the message'
                }
              },
              required: ['role', 'content']
            }
          },
          model: {
            type: 'string',
            description: 'Grok model to use (e.g., grok-2-latest, grok-3, grok-3-reasoner, grok-3-deepsearch, grok-3-mini-beta)',
            default: 'grok-3-mini-beta'
          },
          temperature: {
            type: 'number',
            description: 'Sampling temperature (0-2)',
            minimum: 0,
            maximum: 2,
            default: 1
          },
          max_tokens: {
            type: 'integer',
            description: 'Maximum number of tokens to generate',
            default: 16384
          }
        },
        required: ['messages']
      }
    },
  • Input schema definition for the 'chat_completion' tool, specifying the structure and types for messages, model, temperature, and max_tokens; it matches the inputSchema object embedded in the registration snippet above.
  • Helper method in GrokApiClient that performs the actual HTTP POST request to the xAI API's /chat/completions endpoint, invoked by the handler. A sketch of a plausible axios client setup follows the list.
    async createChatCompletion(messages: any[], options: any = {}): Promise<any> {
      try {
        console.error('[API] Creating chat completion...');
        const requestBody = {
          messages,
          model: options.model || 'grok-3-mini-beta',
          ...options
        };
        const response = await this.axiosInstance.post('/chat/completions', requestBody);
        return response.data;
      } catch (error) {
        console.error('[Error] Failed to create chat completion:', error);
        throw error;
      }
    }
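
The handler above only reads choices[0].message.content from the Grok API response. As a rough, assumed sketch (the xAI API follows the familiar OpenAI-style chat completion shape, but this interface is not taken from the repository):

    // Assumed shape of the fields the handler actually touches.
    interface GrokChatCompletionResponse {
      choices: Array<{
        message: {
          role: string;     // typically 'assistant'
          content: string;  // text returned to the MCP client
        };
      }>;
    }

    // The handler wraps that text in an MCP content block:
    // { content: [{ type: 'text', text: '...' }] }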
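
For context, the registration object is typically returned from a ListTools request handler. A minimal sketch, assuming the @modelcontextprotocol/sdk TypeScript server API (server metadata and structure here are illustrative, not taken from src/index.ts):

    import { Server } from '@modelcontextprotocol/sdk/server/index.js';
    import { ListToolsRequestSchema } from '@modelcontextprotocol/sdk/types.js';

    // Assumed server setup; name and version are placeholders.
    const server = new Server(
      { name: 'grok-mcp', version: '0.1.0' },
      { capabilities: { tools: {} } }
    );

    // ListTools responds with the tool definitions, including chat_completion.
    server.setRequestHandler(ListToolsRequestSchema, async () => ({
      tools: [
        // ... the chat_completion definition shown in the registration snippet ...
      ],
    }));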
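
The helper relies on a preconfigured this.axiosInstance. A plausible setup is sketched below; the base URL matches the public xAI API, but the environment variable name and options are assumptions rather than the repository's actual configuration:

    import axios from 'axios';

    // Assumed axios client; XAI_API_KEY is an illustrative env var name.
    const axiosInstance = axios.create({
      baseURL: 'https://api.x.ai/v1',
      headers: {
        Authorization: `Bearer ${process.env.XAI_API_KEY}`,
        'Content-Type': 'application/json',
      },
    });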

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Bob-lance/grok-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.