
Grok MCP Server

by BrewMyTech

create_completion

Generates text completions via the Grok API. Callers specify a model and a prompt, and can tune the output with optional parameters such as temperature, max_tokens, and the frequency/presence penalties.

Instructions

Create a text completion with the Grok API

Input Schema

| Name | Required | Description | Default |
|------|----------|-------------|---------|
| best_of | No | Generate best_of completions server-side and return the best one | |
| echo | No | Echo back the prompt in addition to the completion | |
| frequency_penalty | No | Penalty for new tokens based on frequency in text (-2 to 2) | |
| logit_bias | No | Map of token IDs to bias scores (-100 to 100) that influence generation | |
| logprobs | No | Include log probabilities on most likely tokens (0-5) | |
| max_tokens | No | Maximum number of tokens to generate | |
| model | Yes | ID of the model to use | |
| n | No | Number of completions to generate | |
| presence_penalty | No | Penalty for new tokens based on presence in text (-2 to 2) | |
| prompt | Yes | The prompt(s) to generate completions for | |
| seed | No | If specified, results will be more deterministic when the same seed is used | |
| stop | No | Sequences where the API will stop generating further tokens | |
| stream | No | Whether to stream back partial progress | |
| suffix | No | The suffix that comes after a completion of inserted text | |
| temperature | No | Sampling temperature (0-2) | |
| top_p | No | Nucleus sampling parameter (0-1) | |
| user | No | A unique user identifier | |
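For illustration, a hypothetical set of arguments for this tool. The model ID shown is a placeholder, not a guaranteed valid Grok model name; consult the Grok API documentation for the models available to your account:

```json
{
  "model": "example-grok-model-id",
  "prompt": "Write a haiku about the sea.",
  "max_tokens": 64,
  "temperature": 0.7,
  "stop": ["\n\n"]
}
```

Only `model` and `prompt` are required; every other field falls back to the API's server-side defaults when omitted.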

Input Schema (JSON Schema)

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "type": "object",
  "additionalProperties": false,
  "required": ["model", "prompt"],
  "properties": {
    "best_of": {
      "description": "Generate best_of completions server-side and return the best one",
      "type": "integer",
      "exclusiveMinimum": 0
    },
    "echo": {
      "description": "Echo back the prompt in addition to the completion",
      "type": "boolean"
    },
    "frequency_penalty": {
      "description": "Penalty for new tokens based on frequency in text (-2 to 2)",
      "type": "number",
      "minimum": -2,
      "maximum": 2
    },
    "logit_bias": {
      "description": "Map of token IDs to bias scores (-100 to 100) that influence generation",
      "type": "object",
      "additionalProperties": { "type": "number" }
    },
    "logprobs": {
      "description": "Include log probabilities on most likely tokens (0-5)",
      "type": "integer",
      "minimum": 0,
      "maximum": 5
    },
    "max_tokens": {
      "description": "Maximum number of tokens to generate",
      "type": "integer",
      "exclusiveMinimum": 0
    },
    "model": {
      "description": "ID of the model to use",
      "type": "string"
    },
    "n": {
      "description": "Number of completions to generate",
      "type": "integer",
      "exclusiveMinimum": 0
    },
    "presence_penalty": {
      "description": "Penalty for new tokens based on presence in text (-2 to 2)",
      "type": "number",
      "minimum": -2,
      "maximum": 2
    },
    "prompt": {
      "description": "The prompt(s) to generate completions for",
      "anyOf": [
        { "type": "string" },
        { "type": "array", "items": { "type": "string" } }
      ]
    },
    "seed": {
      "description": "If specified, results will be more deterministic when the same seed is used",
      "type": "integer"
    },
    "stop": {
      "description": "Sequences where the API will stop generating further tokens",
      "anyOf": [
        { "type": "string" },
        { "type": "array", "items": { "type": "string" } }
      ]
    },
    "stream": {
      "description": "Whether to stream back partial progress",
      "type": "boolean"
    },
    "suffix": {
      "description": "The suffix that comes after a completion of inserted text",
      "type": "string"
    },
    "temperature": {
      "description": "Sampling temperature (0-2)",
      "type": "number",
      "minimum": 0,
      "maximum": 2
    },
    "top_p": {
      "description": "Nucleus sampling parameter (0-1)",
      "type": "number",
      "minimum": 0,
      "maximum": 1
    },
    "user": {
      "description": "A unique user identifier",
      "type": "string"
    }
  }
}
```
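Clients can catch the most common mistakes before calling the tool by checking arguments against the schema. The sketch below is a minimal hand-rolled check of three constraints from the schema above (required fields, the string-or-string-array `prompt`, and the `temperature` range); a full validator such as the third-party `jsonschema` package would enforce every constraint:

```python
def check_completion_args(args: dict) -> list[str]:
    """Return a list of violation messages (empty means the args pass).

    Minimal sketch covering three constraints from the create_completion
    input schema: required fields, prompt shape, and temperature range.
    """
    errors = []

    # "model" and "prompt" are the only required fields.
    for field in ("model", "prompt"):
        if field not in args:
            errors.append(f"missing required field: {field}")

    # "prompt" accepts a string or a list of strings (the schema's anyOf).
    prompt = args.get("prompt")
    if prompt is not None and not (
        isinstance(prompt, str)
        or (isinstance(prompt, list) and all(isinstance(p, str) for p in prompt))
    ):
        errors.append("prompt must be a string or a list of strings")

    # "temperature" is constrained to the 0-2 range.
    temperature = args.get("temperature")
    if temperature is not None and not (0 <= temperature <= 2):
        errors.append("temperature must be between 0 and 2")

    return errors
```

Running the check before issuing the tool call turns schema violations into readable messages instead of a rejected request.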


MCP directory API

All information about MCP servers is available via the Glama MCP directory API. For example, to fetch this server's entry:

```shell
curl -X GET 'https://glama.ai/api/mcp/v1/servers/BrewMyTech/grok-mcp'
```

For feedback or assistance with the MCP directory API, join the Glama Discord server.