
mistral-7b-instruct vs open-mixtral-8x7b

Pricing, Performance & Features Comparison

Author: mistral
Context length: 32K tokens
Reasoning: -
Providers: 1
Released: Sep 2023
Knowledge cutoff: -
License: -

The mistralai/mistral-7b-instruct series is a 7B-parameter language model fine-tuned for instruction-following tasks. It supports an extended context window of up to 32K tokens and can handle function calling, with strong instruction-following performance for its size. As an early demonstration release, it lacks built-in content moderation mechanisms.
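Since the card lists function calling and a text-in/text-out chat interface, a minimal sketch of calling the model through an OpenAI-compatible client follows. The base URL, API-key variable, and the get_weather tool are illustrative placeholders, not a documented endpoint.

```python
# Minimal sketch, assuming an OpenAI-compatible gateway. The base_url,
# environment variable, and get_weather tool below are placeholders.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://example-gateway.invalid/v1",  # hypothetical endpoint
    api_key=os.environ["GATEWAY_API_KEY"],          # hypothetical variable
)

# Standard OpenAI-style tool schema, exercising the function-calling
# support noted in the model card.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical tool, for illustration only
        "description": "Look up the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="mistralai/mistral-7b-instruct",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message)
```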

Input price: $0.03
Output price: $0.055
Latency (p50): -
Output limit: 256 tokens
Function calling: supported
JSON mode: -
Input modality: text
Output modality: text
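A quick sketch of the pricing arithmetic follows. It assumes the listed prices are USD per 1M tokens; the unit is not stated on the card, so treat it as an assumption.

```python
# Pricing arithmetic sketch, assuming prices are USD per 1M tokens.

def request_cost(input_tokens: int, output_tokens: int,
                 in_price_per_m: float, out_price_per_m: float) -> float:
    """USD cost of one request at per-1M-token prices."""
    return (input_tokens * in_price_per_m
            + output_tokens * out_price_per_m) / 1_000_000

# Example: a 2,000-token prompt with a 256-token completion (256 is the
# output limit listed above) on mistral-7b-instruct.
print(f"${request_cost(2_000, 256, 0.03, 0.055):.6f}")  # ≈ $0.000074
```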
Author: mistral
Context length: 33K tokens
Reasoning: -
Providers: 1
Released: Feb 2024
Knowledge cutoff: Dec 2023
License: Apache 2.0

Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) large language model with open weights, released under the Apache 2.0 license. Although it has 45 billion total parameters, only two of its eight experts are active for each token, so per-token compute is roughly that of a 14-billion-parameter dense model. This enables up to 6x faster inference while outperforming Llama 2 70B and matching or exceeding GPT-3.5 on many benchmarks.
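To make the "compute of a much smaller dense model" point concrete, here is an illustrative top-2-of-8 sparse MoE routing sketch in numpy. The shapes and the single weight matrix per expert are simplifications for illustration, not Mixtral's actual implementation.

```python
# Illustrative sparse MoE routing: 8 experts, top-2 active per token.
# Shapes and the single weight matrix per expert are simplifications.
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 64, 8, 2

gate_w = rng.standard_normal((d_model, n_experts))  # router weights
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_layer(x: np.ndarray) -> np.ndarray:
    """Each token runs only its top-2 experts, so per-token compute is
    ~2/8 of the total expert parameters even though all 8 are stored."""
    logits = x @ gate_w                              # (tokens, n_experts)
    top = np.argsort(logits, axis=-1)[:, -top_k:]    # indices of top-2 experts
    out = np.zeros_like(x)
    for t in range(x.shape[0]):
        sel = logits[t, top[t]]
        w = np.exp(sel - sel.max())
        w /= w.sum()                                 # softmax over the top-2 only
        for weight, e in zip(w, top[t]):
            out[t] += weight * (x[t] @ experts[e])
    return out

tokens = rng.standard_normal((4, d_model))
print(moe_layer(tokens).shape)  # (4, 64)
```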

Input price: $0.7
Output price: $0.7
Latency (p50): 744ms
Output limit: 4K tokens
Function calling: supported
JSON mode: supported
Input modality: text
Output modality: text
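Using the same per-1M-token assumption as above, a back-of-the-envelope comparison of the two listed prices on one workload:

```python
# Side-by-side cost comparison, again assuming USD per 1M tokens.

def request_cost(input_tokens, output_tokens, in_per_m, out_per_m):
    return (input_tokens * in_per_m + output_tokens * out_per_m) / 1_000_000

workload = (10_000, 1_000)  # 10k prompt tokens, 1k completion tokens
m7b = request_cost(*workload, 0.03, 0.055)   # mistral-7b-instruct
m8x7b = request_cost(*workload, 0.7, 0.7)    # open-mixtral-8x7b
print(f"mistral-7b-instruct: ${m7b:.6f}")    # ≈ $0.000355
print(f"open-mixtral-8x7b:   ${m8x7b:.6f}")  # ≈ $0.007700
print(f"price ratio: {m8x7b / m7b:.1f}x")    # ≈ 21.7x
```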
[Provider charts not captured: Latency (24h), Success Rate (24h)]