
open-mixtral-8x22b vs phi-3-mini-128k-instruct

Pricing, Performance & Features Comparison

open-mixtral-8x22b
Author: mistral
Context Length: 64K
Reasoning: -
Providers: 1
Released: Apr 2024
Knowledge Cutoff: Sep 2021
License: -

Mixtral 8x22B is Mistral AI's latest open, sparse Mixture-of-Experts (SMoE) model, using only 39B active parameters out of 141B for strong cost efficiency. It features a 64K-token context window, strong capabilities in mathematics and coding, and native function calling, and it is fluent in multiple European languages.
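As a rough illustration of the native function calling mentioned above, the sketch below sends a tool definition to an OpenAI-style chat completions endpoint. The base URL, API key environment variable, and the `get_weather` tool are illustrative assumptions, not part of the comparison data.

```python
# Minimal sketch: calling open-mixtral-8x22b with a tool definition via an
# OpenAI-style /chat/completions endpoint. Endpoint, key variable, and the
# get_weather tool are assumptions for illustration.
import os
import requests

BASE_URL = "https://api.mistral.ai/v1"   # assumed endpoint
API_KEY = os.environ["MISTRAL_API_KEY"]  # assumed environment variable

payload = {
    "model": "open-mixtral-8x22b",
    "messages": [{"role": "user", "content": "What's the weather in Paris?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

resp = requests.post(
    f"{BASE_URL}/chat/completions",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=payload,
    timeout=30,
)
resp.raise_for_status()
# If the model decides to call the tool, the call appears here.
print(resp.json()["choices"][0]["message"].get("tool_calls"))
```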

Input: $2 / 1M tokens
Output: $6 / 1M tokens
Latency (p50): 732ms
Output Limit: 4K
Function Calling: Yes
JSON Mode: Yes
Input Modality: Text
Output Modality: Text
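To make the pricing concrete, here is a quick back-of-the-envelope calculation, assuming the listed rates are USD per 1M tokens and using made-up token counts:

```python
# Rough cost sketch for one request against open-mixtral-8x22b.
# Assumes the listed prices are USD per 1M tokens; token counts are examples.
input_price_per_m = 2.00    # $ per 1M input tokens
output_price_per_m = 6.00   # $ per 1M output tokens

prompt_tokens = 10_000
completion_tokens = 1_000

cost = (prompt_tokens / 1_000_000) * input_price_per_m \
     + (completion_tokens / 1_000_000) * output_price_per_m
print(f"${cost:.4f}")  # -> $0.0260
```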
phi-3-mini-128k-instruct
Author: microsoft
Context Length: 128K
Reasoning: -
Providers: 1
Released: Apr 2024
Knowledge Cutoff: Oct 2023
License: MIT License

Phi-3-mini-128k-instruct is a 3.8 billion-parameter instruction-tuned language model with strong reasoning and logic capabilities. It excels at tasks such as coding, mathematics, content generation, and summarization. Designed for memory- and compute-constrained environments, it offers a 128K-token context window for handling extended text input.
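As a hedged sketch of how a small instruction-tuned model like this is typically run locally, the example below uses the Hugging Face `transformers` library with the `microsoft/Phi-3-mini-128k-instruct` checkpoint; the library choice, generation settings, and prompt are assumptions, not comparison data.

```python
# Minimal sketch: running Phi-3-mini-128k-instruct locally with transformers.
# Assumes the microsoft/Phi-3-mini-128k-instruct checkpoint and a machine with
# enough memory for a 3.8B-parameter model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "microsoft/Phi-3-mini-128k-instruct"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,   # half precision to reduce memory use
    device_map="auto",
    trust_remote_code=True,      # the checkpoint ships custom model code
)

messages = [
    {"role": "user", "content": "Summarize the benefits of a long context window in two sentences."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=200, do_sample=False)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```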

Input: $0.00
Output: $0.00
Latency (p50): -
Output Limit: 4K
Function Calling: -
JSON Mode: -
Input Modality: Text
Output Modality: Text