mixtral-8x22b-instruct vs open-mixtral-8x22b

Pricing, Performance & Features Comparison

mixtral-8x22b-instruct

Author: mistral
Context Length: 66K
Reasoning: -
Providers: 1
Released: Apr 2024
Knowledge Cutoff: -
License: Apache License 2.0

Mixtral-8x22B-Instruct is a mixture-of-experts large language model fine-tuned for following instructions and performing tasks such as code generation, function calling, and multilingual text processing. It achieves strong results on math and coding benchmarks and supports up to a 64k-token context window for large document processing. This model is optimized for reasoning, cost efficiency, and ease of deployment.

Input: $0.90
Output: $0.90
Latency (p50): -
Output Limit: 1K
Function Calling
JSON Mode: -
Input Modality: Text
Output Modality: Text
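The description above highlights instruction following and code generation. As a rough sketch, most hosts expose instruct models like this one through an OpenAI-compatible chat-completions API; the base URL, API key, and exact model identifier below are assumptions that depend on which of the listed providers you use.

```python
from openai import OpenAI

# Hypothetical OpenAI-compatible endpoint; the base URL and model identifier
# vary by provider and are assumptions, not values from this page.
client = OpenAI(base_url="https://example-provider.example/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="mixtral-8x22b-instruct",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},
        {"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."},
    ],
    temperature=0.2,   # low temperature for deterministic code generation
    max_tokens=256,
)

print(response.choices[0].message.content)
```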
open-mixtral-8x22b

Author: mistral
Context Length: 64K
Reasoning: -
Providers: 1
Released: Apr 2024
Knowledge Cutoff: Sep 2021
License: -

Mixtral 8x22B is Mistral AI's latest open, sparse Mixture-of-Experts (SMoE) model that uses only 39B active parameters out of 141B for unparalleled cost efficiency. It features a 64K-token context window, strong capabilities in mathematics and coding, and native function calling, while also being fluent in multiple European languages.
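Since the model advertises native function calling, here is a minimal sketch of what a tool-call request could look like over an OpenAI-compatible endpoint. The endpoint URL, model identifier, and the get_exchange_rate tool are illustrative assumptions, not values taken from this page or any provider's documentation.

```python
from openai import OpenAI

# Hypothetical provider endpoint; swap in the real base URL and credentials.
client = OpenAI(base_url="https://example-provider.example/v1", api_key="YOUR_API_KEY")

# A made-up tool definition used only to illustrate the function-calling flow.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_exchange_rate",
            "description": "Look up the exchange rate between two currencies.",
            "parameters": {
                "type": "object",
                "properties": {
                    "base": {"type": "string", "description": "ISO currency code, e.g. EUR"},
                    "quote": {"type": "string", "description": "ISO currency code, e.g. USD"},
                },
                "required": ["base", "quote"],
            },
        },
    }
]

response = client.chat.completions.create(
    model="open-mixtral-8x22b",
    messages=[{"role": "user", "content": "How many US dollars is 100 euros?"}],
    tools=tools,
    tool_choice="auto",
)

# If the model chooses to call the tool, the arguments arrive as a JSON string.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```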

Input: $2.00
Output: $6.00
Latency (p50): 736ms
Output Limit: 4K
Function Calling
JSON Mode
Input Modality: Text
Output Modality: Text
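Taking the listed prices at face value, and assuming they are quoted in USD per one million tokens (the price unit is not shown on this page), a rough per-request cost comparison of the two listings can be sketched as follows.

```python
# Rough cost comparison. Assumption: the prices above are USD per 1M tokens.
PRICES = {
    "mixtral-8x22b-instruct": {"input": 0.90, "output": 0.90},
    "open-mixtral-8x22b": {"input": 2.00, "output": 6.00},
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate the USD cost of one request for the given token counts."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 50K-token prompt with a 2K-token completion.
for model in PRICES:
    print(model, round(estimate_cost(model, 50_000, 2_000), 4))
```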