create_completion

Generate text completions with the Grok API by specifying a model, a prompt, and optional parameters such as temperature, max_tokens, and frequency/presence penalties to shape the output.

Instructions

Create a text completion with the Grok API

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| best_of | No | Generate best_of completions server-side and return the best one | |
| echo | No | Echo back the prompt in addition to the completion | |
| frequency_penalty | No | Penalty for new tokens based on frequency in text (-2 to 2) | |
| logit_bias | No | Map of token IDs to bias scores (-100 to 100) that influence generation | |
| logprobs | No | Include log probabilities on most likely tokens (0-5) | |
| max_tokens | No | Maximum number of tokens to generate | |
| model | Yes | ID of the model to use | |
| n | No | Number of completions to generate | |
| presence_penalty | No | Penalty for new tokens based on presence in text (-2 to 2) | |
| prompt | Yes | The prompt(s) to generate completions for | |
| seed | No | If specified, results will be more deterministic when the same seed is used | |
| stop | No | Sequences where the API will stop generating further tokens | |
| stream | No | Whether to stream back partial progress | |
| suffix | No | The suffix that comes after a completion of inserted text | |
| temperature | No | Sampling temperature (0-2) | |
| top_p | No | Nucleus sampling parameter (0-1) | |
| user | No | A unique user identifier | |
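
For example, a minimal call that requests a short, low-temperature completion could pass arguments like the following (the model ID "grok-beta" and all values are illustrative, not taken from this server's documentation):

    // Illustrative create_completion arguments; model ID and values are examples only.
    const exampleArgs = {
      model: "grok-beta",
      prompt: "Write a haiku about the sea.",
      max_tokens: 64,
      temperature: 0.2,
      stop: ["\n\n"],
    };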

Implementation Reference

  • Core handler function that performs the POST request to the Grok completions endpoint and validates the response. (A sketch of the grokRequest helper it calls appears after this list.)
    export async function createCompletion(
      options: z.infer<typeof CompletionsRequestSchema>
    ): Promise<z.infer<typeof CompletionsResponseSchema>> {
      const response = await grokRequest("completions", {
        method: "POST",
        body: options,
      });
      return CompletionsResponseSchema.parse(response);
    }
  • Zod schema defining the input parameters for the create_completion tool. (A short validation example follows the list.)
    export const CompletionsRequestSchema = z.object({
      model: z.string().describe("ID of the model to use"),
      prompt: z
        .union([z.string(), z.array(z.string())])
        .describe("The prompt(s) to generate completions for"),
      suffix: z
        .string()
        .optional()
        .describe("The suffix that comes after a completion of inserted text"),
      max_tokens: z
        .number()
        .int()
        .positive()
        .optional()
        .describe("Maximum number of tokens to generate"),
      temperature: z
        .number()
        .min(0)
        .max(2)
        .optional()
        .describe("Sampling temperature (0-2)"),
      top_p: z
        .number()
        .min(0)
        .max(1)
        .optional()
        .describe("Nucleus sampling parameter (0-1)"),
      n: z
        .number()
        .int()
        .positive()
        .optional()
        .describe("Number of completions to generate"),
      stream: z
        .boolean()
        .optional()
        .describe("Whether to stream back partial progress"),
      logprobs: z
        .number()
        .int()
        .min(0)
        .max(5)
        .optional()
        .describe("Include log probabilities on most likely tokens (0-5)"),
      echo: z
        .boolean()
        .optional()
        .describe("Echo back the prompt in addition to the completion"),
      stop: z
        .union([z.string(), z.array(z.string())])
        .optional()
        .describe("Sequences where the API will stop generating further tokens"),
      presence_penalty: z
        .number()
        .min(-2)
        .max(2)
        .optional()
        .describe("Penalty for new tokens based on presence in text (-2 to 2)"),
      frequency_penalty: z
        .number()
        .min(-2)
        .max(2)
        .optional()
        .describe("Penalty for new tokens based on frequency in text (-2 to 2)"),
      logit_bias: z
        .record(z.string(), z.number())
        .optional()
        .describe(
          "Map of token IDs to bias scores (-100 to 100) that influence generation"
        ),
      seed: z
        .number()
        .int()
        .optional()
        .describe(
          "If specified, results will be more deterministic when the same seed is used"
        ),
      best_of: z
        .number()
        .int()
        .positive()
        .optional()
        .describe(
          "Generate best_of completions server-side and return the best one"
        ),
      user: z.string().optional().describe("A unique user identifier"),
    });
  • index.ts:124-136 (registration)
    Registration of the create_completion tool in the FastMCP server, which delegates to the handler function. (A sketch of the handleError helper used here appears after the list.)
    server.addTool({
      name: "create_completion",
      description: "Create a text completion with the Grok API",
      parameters: completions.CompletionsRequestSchema,
      execute: async (args) => {
        try {
          const result = await completions.createCompletion(args);
          return JSON.stringify(result, null, 2);
        } catch (err) {
          handleError(err);
        }
      },
    });
  • Zod schema for validating the response from the completions API. (A sketch of the referenced CompletionsChoiceSchema appears after the list.)
    const CompletionsResponseSchema = z.object({
      id: z.string(),
      object: z.string(),
      created: z.number(),
      model: z.string(),
      choices: z.array(CompletionsChoiceSchema),
      usage: z.object({
        prompt_tokens: z.number(),
        completion_tokens: z.number(),
        total_tokens: z.number(),
      }),
    });
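
The grokRequest helper called by createCompletion is not reproduced on this page. A minimal sketch of what such a helper could look like, assuming the public xAI base URL and an XAI_API_KEY environment variable (both assumptions, not taken from this server's source):

    // Hypothetical sketch of the grokRequest helper referenced by createCompletion.
    // The base URL and auth header are assumptions based on xAI API conventions.
    async function grokRequest(
      path: string,
      init: { method: string; body?: unknown }
    ): Promise<unknown> {
      const response = await fetch(`https://api.x.ai/v1/${path}`, {
        method: init.method,
        headers: {
          "Content-Type": "application/json",
          Authorization: `Bearer ${process.env.XAI_API_KEY}`,
        },
        body: init.body === undefined ? undefined : JSON.stringify(init.body),
      });
      if (!response.ok) {
        throw new Error(`Grok API error ${response.status}: ${await response.text()}`);
      }
      return response.json();
    }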
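
Because CompletionsRequestSchema is an ordinary Zod object, it can also be exercised directly. For example, safeParse rejects an out-of-range temperature before any request is sent (the values below are illustrative):

    // Local validation with the request schema; the temperature violates the 0-2 bound.
    const parsed = CompletionsRequestSchema.safeParse({
      model: "grok-beta",
      prompt: "Hello",
      temperature: 3.5,
    });
    if (!parsed.success) {
      console.error(parsed.error.issues); // reports the temperature constraint violation
    }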
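
The handleError helper used in the registration is likewise not shown here. One plausible shape, assuming it converts failures into FastMCP UserError messages for the client (an assumption, not the confirmed implementation):

    // Hypothetical sketch of the handleError helper used by the tool registration.
    import { UserError } from "fastmcp";

    function handleError(err: unknown): never {
      const message = err instanceof Error ? err.message : String(err);
      // Surface the failure to the MCP client as a readable error message.
      throw new UserError(`Grok API request failed: ${message}`);
    }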
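
The nested CompletionsChoiceSchema referenced by the response schema is not included on this page. Based on the OpenAI-compatible completions response format, it would look roughly like this (an approximation, not the exact source):

    // Sketch of the referenced CompletionsChoiceSchema; the field set follows the
    // OpenAI-compatible completions format and may differ from the actual source.
    const CompletionsChoiceSchema = z.object({
      text: z.string(),
      index: z.number(),
      logprobs: z.unknown().nullable().optional(),
      finish_reason: z.string().nullable(),
    });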

MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/BrewMyTech/grok-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.