create_completion
Generate text completions with the Grok API by specifying a model and prompt, plus optional parameters such as temperature, max_tokens, and frequency/presence penalties to shape the output.
Instructions
Create a text completion with the Grok API
Input Schema
Name | Required | Description | Default |
---|---|---|---|
best_of | No | Generate best_of completions server-side and return the best one | |
echo | No | Echo back the prompt in addition to the completion | |
frequency_penalty | No | Penalty for new tokens based on frequency in text (-2 to 2) | |
logit_bias | No | Map of token IDs to bias scores (-100 to 100) that influence generation | |
logprobs | No | Number of most likely tokens to return log probabilities for (0-5) | |
max_tokens | No | Maximum number of tokens to generate | |
model | Yes | ID of the model to use | |
n | No | Number of completions to generate | |
presence_penalty | No | Penalty for new tokens based on presence in text (-2 to 2) | |
prompt | Yes | The prompt(s) to generate completions for | |
seed | No | Repeated requests with the same seed produce more deterministic results | |
stop | No | Sequences where the API will stop generating further tokens | |
stream | No | Whether to stream back partial progress | |
suffix | No | The suffix that comes after a completion of inserted text | |
temperature | No | Sampling temperature (0-2) | |
top_p | No | Nucleus sampling parameter (0-1) | |
user | No | A unique user identifier | |
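A minimal sketch of assembling and sending a request that follows the schema above. The endpoint URL (`https://api.x.ai/v1/completions`) and model name (`grok-beta`) are assumptions for illustration, not confirmed values; check the official Grok API documentation for the actual base URL, path, and available model IDs.

```python
import json
import urllib.request

# Assumed endpoint; verify against the official Grok API documentation.
API_URL = "https://api.x.ai/v1/completions"

# Optional parameters from the input schema above.
OPTIONAL_PARAMS = {
    "best_of", "echo", "frequency_penalty", "logit_bias", "logprobs",
    "max_tokens", "n", "presence_penalty", "seed", "stop", "stream",
    "suffix", "temperature", "top_p", "user",
}


def build_completion_request(model, prompt, **options):
    """Build the JSON payload: model and prompt are required,
    all other schema fields are optional keyword arguments."""
    unknown = set(options) - OPTIONAL_PARAMS
    if unknown:
        raise ValueError(f"unknown parameters: {sorted(unknown)}")
    payload = {"model": model, "prompt": prompt}
    # Only include options that were actually set.
    payload.update({k: v for k, v in options.items() if v is not None})
    return payload


def create_completion(payload, api_key):
    """POST the payload and return the parsed JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


# Example payload; "grok-beta" is a placeholder model ID.
payload = build_completion_request(
    "grok-beta", "Once upon a time", temperature=0.7, max_tokens=64, stop=["\n"]
)
```

Validating parameter names client-side catches typos (e.g. `temperture`) before the request is sent, and omitting unset options keeps the payload to exactly the fields you chose.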