open-mistral-7b vs open-mixtral-8x7b

Pricing, Performance & Features Comparison

open-mistral-7b

Author: mistral
Context Length: 33K tokens
Reasoning: -
Providers: 1
Released: Feb 2024
Knowledge Cutoff: Oct 2023
License: -
Mistral 7B is a 7.3-billion-parameter language model that outperforms Llama 2 13B on all reported benchmarks and Llama 1 34B on many of them. Architecturally, it uses grouped-query attention (GQA) for faster inference and sliding-window attention (SWA) to handle longer sequences at lower cost, which lets it approach CodeLlama 7B's performance on coding tasks while retaining strong English-language capabilities.
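
The sliding-window mechanism is easy to picture as an attention mask. Below is a minimal NumPy sketch (a toy illustration, not Mistral's implementation) of a causal mask restricted to a fixed window; Mistral 7B's published window size is 4,096 tokens, and because layers stack, information can still propagate beyond a single window.

```python
import numpy as np

def sliding_window_mask(seq_len: int, window: int) -> np.ndarray:
    """Boolean mask where position i may attend only to positions j
    with i - window < j <= i: causal attention over a fixed window."""
    i = np.arange(seq_len)[:, None]
    j = np.arange(seq_len)[None, :]
    return (j <= i) & (j > i - window)

# Tiny example; Mistral 7B uses window=4096 in practice.
print(sliding_window_mask(seq_len=6, window=3).astype(int))
```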

Input Price: $0.25 / 1M tokens
Output Price: $0.25 / 1M tokens
Latency (p50): 695 ms
Output Limit: 8K tokens
Function Calling
JSON Mode
Input Modality: Text
Output Modality: Text
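
The Function Calling and JSON Mode rows refer to API-level features. As a minimal sketch, the request below asks open-mistral-7b for JSON output through Mistral's chat-completions endpoint; the URL and the response_format field follow Mistral's public API, the prompt is illustrative, and per-model feature availability should be confirmed against the provider's current documentation.

```python
import os
import requests

# Minimal sketch: request JSON-mode output from open-mistral-7b.
# Assumes the MISTRAL_API_KEY environment variable is set.
resp = requests.post(
    "https://api.mistral.ai/v1/chat/completions",
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mistral-7b",
        "messages": [
            {"role": "user",
             "content": "Return a JSON object with keys 'city' and 'country' for Paris."}
        ],
        "response_format": {"type": "json_object"},  # enables JSON mode
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```
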
open-mixtral-8x7b

Author: mistral
Context Length: 33K tokens
Reasoning: -
Providers: 1
Released: Feb 2024
Knowledge Cutoff: Dec 2023
License: -

Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) language model with open weights, released under the Apache 2.0 license. Although it holds 46.7 billion total parameters, its router activates only 2 of 8 expert networks per token, so roughly 12.9 billion parameters are used per forward pass. Mistral reports about 6x faster inference than Llama 2 70B, which Mixtral outperforms, while matching or exceeding GPT-3.5 on most standard benchmarks.
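
The active-parameter arithmetic follows from top-2 routing: each token is processed by only 2 of the 8 expert feed-forward networks in each MoE layer. The toy NumPy sketch below shows the routing pattern; the ReLU feed-forward stand-ins and the dimensions are made up for illustration (Mixtral's real experts are SwiGLU blocks).

```python
import numpy as np

def moe_layer(x, experts, gate_w, top_k=2):
    """Toy sparse MoE: route one token vector x to its top_k experts.

    experts: list of (W_in, W_out) weight pairs, one per expert FFN.
    gate_w:  router matrix mapping x to one logit per expert.
    Only top_k of the experts run, so per-token compute is a small
    fraction of the layer's total parameter count."""
    logits = x @ gate_w                      # one logit per expert
    top = np.argsort(logits)[-top_k:]        # indices of the chosen experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                 # softmax over the selected experts
    out = np.zeros_like(x)
    for w, idx in zip(weights, top):
        w_in, w_out = experts[idx]
        out += w * (np.maximum(x @ w_in, 0.0) @ w_out)  # ReLU FFN stand-in
    return out

rng = np.random.default_rng(0)
d, hidden, num_experts = 16, 64, 8
experts = [(rng.normal(size=(d, hidden)), rng.normal(size=(hidden, d)))
           for _ in range(num_experts)]
gate_w = rng.normal(size=(d, num_experts))
print(moe_layer(rng.normal(size=d), experts, gate_w).shape)  # (16,)
```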

Input Price: $0.70 / 1M tokens
Output Price: $0.70 / 1M tokens
Latency (p50): 730 ms
Output Limit: 4K tokens
Function Calling
JSON Mode
Input Modality: Text
Output Modality: Text
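
Taken together, the listed prices make head-to-head cost estimates straightforward. A small worked example, assuming the figures above are USD per million tokens (consistent with Mistral's published pricing) and an illustrative workload:

```python
# Cost comparison from the prices above, in USD per 1M tokens.
PRICES = {
    "open-mistral-7b":   {"input": 0.25, "output": 0.25},
    "open-mixtral-8x7b": {"input": 0.70, "output": 0.70},
}

def cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Illustrative workload: 10K prompt tokens, 2K completion tokens.
for model in PRICES:
    print(f"{model}: ${cost_usd(model, 10_000, 2_000):.4f}")
# open-mistral-7b:   $0.0030
# open-mixtral-8x7b: $0.0084
```

At these rates, open-mixtral-8x7b costs 2.8x as much per token as open-mistral-7b in both directions.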