Mistral chat completion
mistral_chat: Generate chat completions from Mistral models for content drafting, code generation, or classification. Returns structured output with the assistant text and token usage.
Instructions
Generate a chat completion using a Mistral model.
When to use:
- Drafting French (or other European-language) content, where Mistral models are strong.
- Codestral for code-specific generation and review.
- Ministral for cheap, low-latency classification.
Returns structured content with the assistant text and token usage. Does NOT stream — use mistral_chat_stream for long outputs with progress updates.
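As a quick sketch, a minimal request could be assembled as a plain dictionary whose field names follow the input schema below. The invocation transport is not specified here, so how this payload is actually sent to the tool is an assumption:

```python
# Hypothetical mistral_chat request payload. Field names mirror the input
# schema; the mechanism that delivers this dict to the tool is assumed.
request = {
    "model": "mistral-medium-latest",  # the documented default
    "messages": [
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Draft a two-sentence product blurb in French."},
    ],
    "temperature": 0.3,
    "max_tokens": 200,
}

# Basic sanity checks on the role/content message form.
assert all({"role", "content"} <= m.keys() for m in request["messages"])
print(request["model"])
```

For long generations, remember that this tool does not stream; `mistral_chat_stream` is the documented alternative.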
Input Schema
| Name | Required | Description | Default |
|---|---|---|---|
| messages | Yes | Chat messages in role/content form. | |
| model | No | Mistral chat model alias. Allowed: mistral-large-latest, mistral-medium-latest, mistral-small-latest, ministral-3b-latest, ministral-8b-latest, ministral-14b-latest, magistral-medium-latest, magistral-small-latest, devstral-latest, devstral-small-latest, codestral-latest, voxtral-small-latest. | mistral-medium-latest |
| response_format | No | Force a structured output: `{type:"json_object"}` for JSON mode, `{type:"json_schema", json_schema:{...}}` for strict schema mode. | |
| reasoning_effort | No | Controls reasoning depth for Magistral models. 'high' enables full chain-of-thought; 'none' disables it. Ignored on non-reasoning models. | |
| temperature | No | Sampling temperature; higher values produce more varied output. | |
| max_tokens | No | Maximum number of tokens to generate in the completion. | |
| top_p | No | Nucleus sampling: restricts sampling to the smallest set of tokens whose cumulative probability reaches `top_p`. | |
| seed | No | Random seed for deterministic sampling. Maps to Mistral's `random_seed`. | |
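To illustrate `response_format` in strict schema mode, here is a sketch of a low-latency classification request. The `name`/`schema` nesting inside `json_schema` is an assumption based on common JSON-schema output modes, not something the table above specifies:

```python
# Hypothetical strict-schema classification request. The response_format
# wrapper shape ({type, json_schema:{name, schema}}) is an assumption.
classification_request = {
    "model": "ministral-8b-latest",  # cheap/low-latency, per the guidance above
    "messages": [
        {"role": "user", "content": "Classify: 'Great product, fast shipping!'"},
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {
            "name": "sentiment",
            "schema": {
                "type": "object",
                "properties": {
                    "label": {
                        "type": "string",
                        "enum": ["positive", "negative", "neutral"],
                    },
                },
                "required": ["label"],
            },
        },
    },
    "seed": 42,  # forwarded as Mistral's random_seed for reproducibility
}
print(classification_request["response_format"]["type"])
```

For looser JSON output without a fixed schema, `{"type": "json_object"}` is the documented alternative.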
Output Schema
| Name | Required | Description | Default |
|---|---|---|---|
| text | Yes | Assistant message text. | |
| model | Yes | Model that produced the completion. | |
| usage | No | Token usage counts for the request. | |
| finish_reason | No | Why generation stopped (e.g. natural stop vs. max_tokens reached). | |
| reasoning_content | No | Reasoning trace returned by Magistral models. Absent for non-reasoning models. |
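Reading a result defensively follows from the required/optional split above. The concrete field values here are illustrative, not real output:

```python
# Hypothetical response in the shape of the output schema. `text` and
# `model` are required; the rest may be absent, so use .get() for them.
response = {
    "text": "Bonjour ! Comment puis-je vous aider ?",
    "model": "mistral-medium-latest",
    "usage": {"prompt_tokens": 12, "completion_tokens": 9},
    "finish_reason": "stop",
}

answer = response["text"]                      # required, safe to index
reasoning = response.get("reasoning_content")  # only present for Magistral models
print(answer)
print(reasoning is None)
```

Because `reasoning_content` is absent on non-reasoning models, code that always indexes it directly would raise a `KeyError` for most of the allowed model aliases.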