Glama

Mistral chat completion

mistral_chat
Read-only

Generate chat completions from Mistral models for content drafting, code generation, or classification. Returns structured output with assistant text and token usage.

Instructions

Generate a chat completion using a Mistral model.

When to use:

  • Drafting French (or any European-language) content where Mistral shines.

  • Codestral for code-specific generation/review.

  • Ministral for cheap, low-latency classification.

Returns structured content with the assistant text and token usage. Does NOT stream — use mistral_chat_stream for long outputs with progress updates.

Input Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| messages | Yes | Chat messages in role/content form. | |
| model | No | Mistral chat model alias. Allowed: mistral-large-latest, mistral-medium-latest, mistral-small-latest, ministral-3b-latest, ministral-8b-latest, ministral-14b-latest, magistral-medium-latest, magistral-small-latest, devstral-latest, devstral-small-latest, codestral-latest, voxtral-small-latest. | mistral-medium-latest |
| response_format | No | Force a structured output: `{type:"json_object"}` for JSON mode, `{type:"json_schema", json_schema:{...}}` for strict schema mode. | |
| reasoning_effort | No | Controls reasoning depth for Magistral models. `high` enables full chain-of-thought; `none` disables it. Ignored on non-reasoning models. | |
| temperature | No | | |
| max_tokens | No | | |
| top_p | No | | |
| seed | No | Random seed for deterministic sampling. Maps to Mistral's `random_seed`. | |
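
As an illustration of the schema above, the arguments for a call might be shaped as follows. This is a hedged sketch: the surrounding MCP tool-call envelope is client-specific, and the values here are invented; only the key names come from the schema.

```python
import json

# Hypothetical arguments for a mistral_chat tool call, following the
# input schema above. Only "messages" is required.
arguments = {
    "messages": [
        {"role": "system", "content": "Reponds en francais."},
        {"role": "user", "content": "Redige un slogan pour une boulangerie."},
    ],
    "model": "ministral-8b-latest",              # omit to get mistral-medium-latest
    "response_format": {"type": "json_object"},  # JSON mode
    "seed": 42,                                  # maps to Mistral's random_seed
}

print(json.dumps(arguments, indent=2))
```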

Output Schema

| Name | Required | Description | Default |
| --- | --- | --- | --- |
| text | Yes | | |
| model | Yes | | |
| usage | No | | |
| finish_reason | No | | |
| reasoning_content | No | Reasoning trace returned by Magistral models. Absent for non-reasoning models. | |
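
A sketch of consuming a result shaped like the output schema above. The field names come from the schema; the values and the exact payload shape are assumptions for illustration.

```python
# Hypothetical result payload matching the output schema above.
result = {
    "text": "Le pain, c'est la vie.",
    "model": "mistral-medium-latest",
    "usage": {"prompt_tokens": 28, "completion_tokens": 9, "total_tokens": 37},
    "finish_reason": "stop",
    # "reasoning_content" appears only for Magistral reasoning models.
}

# Only text and model are required; guard the optional fields.
tokens = (result.get("usage") or {}).get("total_tokens", 0)
print(f"{result['model']}: {tokens} tokens, finish={result.get('finish_reason')}")
```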
Behavior: 4/5

Does the description disclose side effects, auth requirements, rate limits, or destructive behavior?

Annotations already provide readOnlyHint=true and destructiveHint=false. Description adds that it returns structured content with assistant text and token usage, and does not stream, providing useful behavioral context beyond annotations.

Agents need to know what a tool does to the world before calling it. Descriptions should go beyond structured annotations to explain consequences.

Conciseness: 5/5

Is the description appropriately sized, front-loaded, and free of redundancy?

Four sentences, front-loaded with purpose, followed by usage guidelines and behavioral notes. No wasted words; every sentence adds value.

Shorter descriptions cost fewer tokens and are easier for agents to parse. Every sentence should earn its place.

Completeness: 4/5

Given the tool's complexity, does the description cover enough for an agent to succeed on first attempt?

Output schema exists, so description need not detail return values. It covers purpose, usage context, alternatives, and a behavioral note (no streaming). For a tool with 8 params, it is fairly complete but could mention that the model parameter affects capabilities (e.g., reasoning models).

Complex tools with many parameters or behaviors need more documentation. Simple tools need less. This dimension scales expectations accordingly.

Parameters: 3/5

Does the description clarify parameter syntax, constraints, interactions, or defaults beyond what the schema provides?

Schema description coverage is 63% (5 of 8 parameters have descriptions in schema). Description does not add parameter-level semantics beyond what schema provides; it focuses on output and usage. Baseline of 3 is appropriate.

Input schemas describe structure but not intent. Descriptions should explain non-obvious parameter relationships and valid value ranges.

Purpose: 5/5

Does the description clearly state what the tool does and how it differs from similar tools?

Clearly states 'Generate a chat completion using a Mistral model' and distinguishes from siblings like Codestral for code and Ministral for cheap classification. Verb and resource are specific.

Agents choose between tools based on descriptions. A clear purpose with a specific verb and resource helps agents select the right tool.

Usage Guidelines: 5/5

Does the description explain when to use this tool, when not to, or what alternatives exist?

Explicitly suggests when to use (French/European language tasks), when to use alternatives (Codestral for code, Ministral for classification), and warns against using for streaming, directing to a streaming variant.

Agents often have multiple tools that could apply. Explicit usage guidance like "use X instead of Y when Z" prevents misuse.


MCP directory API

We provide all the information about MCP servers via our MCP API.

curl -X GET 'https://glama.ai/api/mcp/v1/servers/Swih/mistral-mcp'

If you have feedback or need assistance with the MCP directory API, please join our Discord server.