
claude-3-opus-20240229 vs open-mixtral-8x7b

Pricing, Performance & Features Comparison

Author: anthropic
Context Length: 200K
Reasoning: -
Providers: 1
Released: Feb 2024
Knowledge Cutoff: Aug 2023
License: -

Claude 3 Opus is the most capable model in the Claude 3 family, offering near-human comprehension and robust handling of complex tasks across languages and formats. It performs strongly on benchmarks such as MMLU, GPQA, and GSM8K, and while its standard context window is 200K tokens, windows of up to one million tokens are available for specific use cases. The model is tuned to reduce bias and improve accuracy, making it well suited to challenging scenarios and responsible deployments.

Input: $15
Output: $75
Latency (p50): -
Output Limit: 4K
Function Calling
JSON Mode
Input Modalities: Text, Image
Output Modalities: Text
Cache Read: $1.50
Cache Write: $19
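The listed rates match Anthropic's published per-million-token pricing, so per-request cost is simple arithmetic. Below is a minimal sketch; the helper name and the assumption that every rate is dollars per 1M tokens are ours, not from this page. It also folds in the cache read/write rates shown above.

```python
# Estimate Claude 3 Opus request cost from the listed rates.
# Assumption (hypothetical helper): all rates are dollars per 1M tokens,
# matching Anthropic's published pricing for this model.
RATES = {
    "input": 15.00,        # $/1M input tokens
    "output": 75.00,       # $/1M output tokens
    "cache_read": 1.50,    # $/1M cached input tokens read back
    "cache_write": 19.00,  # $/1M input tokens written to the cache
}

def request_cost(input_tokens: int, output_tokens: int,
                 cached_tokens: int = 0, cache_write_tokens: int = 0) -> float:
    """Dollar cost of one request; token arguments are raw counts, not millions."""
    uncached = input_tokens - cached_tokens
    return (uncached * RATES["input"]
            + output_tokens * RATES["output"]
            + cached_tokens * RATES["cache_read"]
            + cache_write_tokens * RATES["cache_write"]) / 1_000_000

# 100K-token prompt, 1K-token reply, no caching:
print(f"${request_cost(100_000, 1_000):.3f}")  # $1.575
```

With half that prompt served from cache, the same request drops to $0.900, since cached tokens are billed at $1.50 per million instead of $15.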
Author: mistral
Context Length: 33K
Reasoning: -
Providers: 1
Released: Feb 2024
Knowledge Cutoff: Dec 2023
License: -

Mixtral 8x7B is a high-quality sparse mixture-of-experts (SMoE) large language model with open weights, released under the Apache 2.0 license. Although it has 46.7 billion total parameters, only about 13 billion are active per token, giving it the inference cost of a much smaller dense model: roughly 6x faster inference than Llama 2 70B, while matching or exceeding GPT-3.5 on many benchmarks.
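The sparse mixture-of-experts idea described above can be sketched in a few lines. Everything below is a toy illustration (dummy experts, made-up router scores), not Mixtral's actual architecture; the one faithful detail is that Mixtral's router sends each token to the top 2 of 8 experts per layer.

```python
import math

NUM_EXPERTS, TOP_K = 8, 2  # Mixtral routes each token to 2 of 8 experts

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def moe_layer(token, router_scores, experts):
    """Weight the top-k experts' outputs by their renormalized router probabilities."""
    probs = softmax(router_scores)
    top = sorted(range(NUM_EXPERTS), key=lambda i: probs[i], reverse=True)[:TOP_K]
    norm = sum(probs[i] for i in top)
    # Only TOP_K experts actually run: this is why compute per token stays
    # small even though the total parameter count is large.
    return sum(probs[i] / norm * experts[i](token) for i in top)

# Dummy experts: each just scales its input by a different factor.
experts = [lambda x, k=k: (k + 1) * x for k in range(NUM_EXPERTS)]
scores = [0.1, 2.0, 0.3, 1.5, 0.0, -1.0, 0.2, 0.4]
print(moe_layer(1.0, scores, experts))  # experts 1 and 3 win the routing
```

The output is a blend of only the two selected experts; the other six contribute nothing to compute for this token, which is the source of the dense-model-sized inference cost.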

Input: $0.7
Output: $0.7
Latency (p50): 744ms
Output Limit: 4K
Function Calling
JSON Mode
Input Modalities: Text
Output Modalities: Text
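Given both rate cards, a quick back-of-the-envelope comparison (a hypothetical helper, again assuming the rates are dollars per 1M tokens) shows how far apart the two models sit on price:

```python
def cost(in_rate: float, out_rate: float,
         input_tokens: int, output_tokens: int) -> float:
    """Dollar cost of one request at the given per-1M-token rates."""
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# Same request on each model: 10K-token prompt, 2K-token reply.
opus = cost(15.00, 75.00, 10_000, 2_000)   # 0.30
mixtral = cost(0.70, 0.70, 10_000, 2_000)  # 0.0084
print(f"Opus ${opus:.4f} vs Mixtral ${mixtral:.4f}")
```

At these rates Opus costs roughly 36x more for this request mix, and the gap widens on output-heavy workloads: Opus prices output at 5x its input rate, while Mixtral charges a flat $0.7 both ways.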