# create_completion
Generate text completions with the Grok API by specifying a model, a prompt, and optional sampling parameters such as temperature, max_tokens, and frequency/presence penalties to shape the output.
## Instructions

Create a text completion with the Grok API.
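Below is a minimal sketch of calling this operation directly over HTTP. The endpoint URL, the model ID, and the `XAI_API_KEY` environment variable are assumptions for illustration (the schema below does not specify them); adjust all three to match your deployment.

```python
import os
import requests

# Assumed OpenAI-compatible completions path; not taken from this document.
API_URL = "https://api.x.ai/v1/completions"
API_KEY = os.environ["XAI_API_KEY"]  # assumed environment variable name

payload = {
    "model": "grok-beta",  # hypothetical model ID; use a real one
    "prompt": "Write a haiku about the sea.",
    "max_tokens": 64,
    "temperature": 0.7,
}

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()

# OpenAI-style completion responses carry the generated text
# under choices[].text.
print(resp.json()["choices"][0]["text"])
```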
## Input Schema
| Name | Required | Description | Default | 
|---|---|---|---|
| best_of | No | Generate best_of completions server-side and return the best one | |
| echo | No | Echo back the prompt in addition to the completion | |
| frequency_penalty | No | Penalty for new tokens based on frequency in text (-2 to 2) | |
| logit_bias | No | Map of token IDs to bias scores (-100 to 100) that influence generation | |
| logprobs | No | Include log probabilities for the most likely tokens at each position (0-5) | |
| max_tokens | No | Maximum number of tokens to generate | |
| model | Yes | ID of the model to use | |
| n | No | Number of completions to generate | |
| presence_penalty | No | Penalty for new tokens based on presence in text (-2 to 2) | |
| prompt | Yes | The prompt(s) to generate completions for | |
| seed | No | If specified, results will be more deterministic when the same seed is used | |
| stop | No | Sequences where the API will stop generating further tokens | |
| stream | No | Whether to stream back partial progress | |
| suffix | No | The suffix that comes after a completion of inserted text | |
| temperature | No | Sampling temperature (0-2) | |
| top_p | No | Nucleus sampling parameter (0-1) | |
| user | No | A unique user identifier | |
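To make the optional parameters concrete, here is one plausible request body exercising several of them together. Every value is illustrative, including the model ID and the token ID in `logit_bias` (token IDs depend on the model's tokenizer).

```python
# Illustrative request body only; nothing here is a documented default.
request_body = {
    "model": "grok-beta",            # hypothetical model ID (required)
    "prompt": "Translate to French: Hello, world!",  # required
    "max_tokens": 32,                # cap on generated tokens
    "temperature": 0.2,              # lower => more deterministic sampling
    "top_p": 0.9,                    # nucleus sampling cutoff
    "n": 3,                          # return three completions...
    "best_of": 5,                    # ...picked from five sampled server-side
                                     # (commonly required to satisfy best_of >= n)
    "stop": ["\n"],                  # stop at the first newline
    "seed": 42,                      # reuse the same seed for more stable output
    "frequency_penalty": 0.5,        # discourage verbatim repetition
    "logit_bias": {"50256": -100},   # example token ID; -100 effectively bans it
    "user": "user-1234",             # opaque end-user identifier
}
```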
## Input Schema (JSON Schema)
```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "additionalProperties": false,
  "properties": {
    "best_of": {
      "description": "Generate best_of completions server-side and return the best one",
      "exclusiveMinimum": 0,
      "type": "integer"
    },
    "echo": {
      "description": "Echo back the prompt in addition to the completion",
      "type": "boolean"
    },
    "frequency_penalty": {
      "description": "Penalty for new tokens based on frequency in text (-2 to 2)",
      "maximum": 2,
      "minimum": -2,
      "type": "number"
    },
    "logit_bias": {
      "additionalProperties": {
        "type": "number"
      },
      "description": "Map of token IDs to bias scores (-100 to 100) that influence generation",
      "type": "object"
    },
    "logprobs": {
      "description": "Include log probabilities on most likely tokens (0-5)",
      "maximum": 5,
      "minimum": 0,
      "type": "integer"
    },
    "max_tokens": {
      "description": "Maximum number of tokens to generate",
      "exclusiveMinimum": 0,
      "type": "integer"
    },
    "model": {
      "description": "ID of the model to use",
      "type": "string"
    },
    "n": {
      "description": "Number of completions to generate",
      "exclusiveMinimum": 0,
      "type": "integer"
    },
    "presence_penalty": {
      "description": "Penalty for new tokens based on presence in text (-2 to 2)",
      "maximum": 2,
      "minimum": -2,
      "type": "number"
    },
    "prompt": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "items": {
            "type": "string"
          },
          "type": "array"
        }
      ],
      "description": "The prompt(s) to generate completions for"
    },
    "seed": {
      "description": "If specified, results will be more deterministic when the same seed is used",
      "type": "integer"
    },
    "stop": {
      "anyOf": [
        {
          "type": "string"
        },
        {
          "items": {
            "type": "string"
          },
          "type": "array"
        }
      ],
      "description": "Sequences where the API will stop generating further tokens"
    },
    "stream": {
      "description": "Whether to stream back partial progress",
      "type": "boolean"
    },
    "suffix": {
      "description": "The suffix that comes after a completion of inserted text",
      "type": "string"
    },
    "temperature": {
      "description": "Sampling temperature (0-2)",
      "maximum": 2,
      "minimum": 0,
      "type": "number"
    },
    "top_p": {
      "description": "Nucleus sampling parameter (0-1)",
      "maximum": 1,
      "minimum": 0,
      "type": "number"
    },
    "user": {
      "description": "A unique user identifier",
      "type": "string"
    }
  },
  "required": [
    "model",
    "prompt"
  ],
  "type": "object"
}
```
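Because the schema above is standard JSON Schema (draft-07), a request body can be validated client-side before it is sent. Here is a minimal sketch using the `jsonschema` Python package; the package choice and the saved filename are assumptions, and any draft-07 validator would work.

```python
import json
from jsonschema import Draft7Validator

# Assumes the schema above was saved as create_completion.schema.json.
with open("create_completion.schema.json") as f:
    schema = json.load(f)

payload = {"model": "grok-beta", "prompt": "Hello", "temperature": 0.7}

# Collect every violation instead of stopping at the first one.
errors = sorted(Draft7Validator(schema).iter_errors(payload), key=str)
for err in errors:
    print(f"{list(err.absolute_path)}: {err.message}")
if not errors:
    print("payload is valid")
```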