Glama

jamba-1-5-large vs phi-3.5-mini-128k-instruct

Pricing, Performance & Features Comparison

Price unit: per 1M tokens
Author: ai21
Context Length: 256K
Reasoning: -
Providers: 1
Released: Aug 2024
Knowledge Cutoff: Mar 2024
License: -

AI21’s Jamba-1.5-Large is a high-performance, instruction-following LLM that excels at long-context handling and offers robust capabilities for tasks such as text generation, summarization, and question answering. It combines a novel hybrid Mamba-Transformer architecture with a Mixture-of-Experts (MoE) design to deliver both scalability and speed. The model is optimized for enterprise applications, providing tools such as function calling and structured JSON output.
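The structured JSON output mentioned above is typically exercised through a JSON-mode request. A minimal sketch of building such a request payload, assuming an OpenAI-style chat-completions schema (the exact field names and endpoint belong to AI21's documentation and are illustrative here):

```python
import json


def build_jamba_json_request(prompt: str) -> dict:
    # Hypothetical payload shape modeled on common OpenAI-style chat APIs;
    # consult AI21's API reference for the authoritative schema.
    return {
        "model": "jamba-1.5-large",
        "messages": [{"role": "user", "content": prompt}],
        # JSON mode: ask the model to emit a valid JSON object.
        "response_format": {"type": "json_object"},
        "max_tokens": 1024,
    }


payload = build_jamba_json_request(
    "List three fruits as a JSON object under the key 'fruits'."
)
print(json.dumps(payload, indent=2))
```

The payload would then be POSTed to the provider's chat-completions endpoint with an API key; only the `response_format` field distinguishes a JSON-mode request from a plain one.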

Input: $2
Output: $8
Latency (p50): -
Output Limit: 4K
Function Calling: Yes
JSON Mode: Yes
Input: Text
Output: Text
Author: microsoft
Context Length: 128K
Reasoning: -
Providers: 1
Released: Aug 2024
Knowledge Cutoff: Oct 2023
License: MIT License

Phi-3.5-mini-128k-instruct is a compact, advanced language model with a 128K-token context window, enabling tasks such as long-document summarization, multi-turn conversation, and complex reasoning. It is optimized for logic, code, and math, and provides robust multilingual capabilities. With 3.8 billion parameters, it uses a dense decoder-only transformer architecture that balances performance and efficiency.
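The 3.8-billion-parameter figure gives a quick way to estimate the memory needed just to hold the weights. A rough back-of-the-envelope sketch (weights only; the KV cache and activations for a full 128K context add substantially more):

```python
def approx_weight_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Memory to store model weights alone: parameters x bytes per parameter."""
    return n_params * bytes_per_param / 1e9


N_PARAMS = 3.8e9  # parameter count from the description above

for precision, nbytes in [("fp16", 2.0), ("int8", 1.0), ("int4", 0.5)]:
    print(f"{precision}: ~{approx_weight_memory_gb(N_PARAMS, nbytes):.1f} GB")
# fp16: ~7.6 GB
# int8: ~3.8 GB
# int4: ~1.9 GB
```

This is why a 3.8B model is attractive for single-GPU or even edge deployment: quantized to 4 bits, the weights fit in roughly 2 GB.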

Input: $0.1
Output: $0.1
Latency (p50): -
Output Limit: 128K
Function Calling: -
JSON Mode: -
Input: Text
Output: Text
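The per-token prices above make the cost gap concrete. A small sketch comparing the two models on the same request, assuming the listed prices are quoted per 1M tokens:

```python
def request_cost_usd(input_tokens: int, output_tokens: int,
                     input_price: float, output_price: float) -> float:
    """Cost of one request, with prices quoted in USD per 1M tokens."""
    return (input_tokens * input_price + output_tokens * output_price) / 1_000_000


# Example: a 10,000-token prompt with a 1,000-token completion.
jamba = request_cost_usd(10_000, 1_000, 2.0, 8.0)  # Jamba-1.5-Large prices
phi = request_cost_usd(10_000, 1_000, 0.1, 0.1)    # Phi-3.5-mini prices
print(f"Jamba-1.5-Large: ${jamba:.4f}")  # $0.0280
print(f"Phi-3.5-mini:    ${phi:.4f}")    # $0.0011
```

At these list prices the same request costs roughly 25x more on Jamba-1.5-Large, which is the expected trade-off between a large MoE model and a 3.8B dense one.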