
deepseek-r1 vs deepseek-r1-distill-llama-70b

Pricing, Performance & Features Comparison

deepseek-r1

Author: deepseek
Context Length: 64K
Reasoning: -
Providers: 2
Released: Jan 2025
Knowledge Cutoff: Jul 2024
License: -
Input: $0.55
Output: $2.2
Latency (p50): 6.9s
Output Limit: 8K
Function Calling: -
JSON Mode: -
Input Modalities: Text
Output Modalities: Text
Providers:
deepseek (Cheapest): input $0.55, output $2.2, cache read $0.14, cache write $0.55
Second provider (name not shown): input $6.9, output $6.9
[Charts omitted: Latency (24h), Success Rate (24h)]
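The listed prices can be combined into a rough per-request cost. The sketch below assumes the figures are USD per 1M tokens (the page's price unit is not shown, so this is an assumption), uses the cheapest deepseek-r1 provider, and ignores cache read/write discounts.

```python
# Rough per-request cost for deepseek-r1, assuming prices are USD per 1M
# tokens (an assumption; the price unit is not shown on this page).
INPUT_PER_M = 0.55    # $ per 1M input tokens (cheapest listed provider)
OUTPUT_PER_M = 2.20   # $ per 1M output tokens

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Cost of one request, ignoring cache read/write discounts."""
    return input_tokens / 1e6 * INPUT_PER_M + output_tokens / 1e6 * OUTPUT_PER_M

# Example: a 2,000-token prompt with a 1,500-token reasoning response.
print(f"${request_cost(2_000, 1_500):.4f}")  # ≈ $0.0044
```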
deepseek-r1-distill-llama-70b

Author: deepseek
Context Length: 128K
Reasoning: -
Providers: 1
Released: Jan 2025
Knowledge Cutoff: Jul 2024
License: -

DeepSeek-R1-Distill-Llama-70B is a 70-billion-parameter language model built by distilling the reasoning patterns of the larger DeepSeek-R1 into a smaller Llama-based architecture. The distilled model scores strongly on reasoning benchmarks such as AIME 2024, MATH-500, and LiveCodeBench while remaining far cheaper to run than its teacher, making it a practical choice for a wide range of reasoning and natural language processing tasks.
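As a rough illustration of the distillation idea described above, the sketch below fine-tunes a smaller student model on reasoning traces generated by a larger teacher (sequence-level distillation via standard supervised fine-tuning). The checkpoint name, data, and training loop are illustrative placeholders, not DeepSeek's actual recipe.

```python
# Minimal sketch of sequence-level distillation: a smaller student is
# fine-tuned on outputs sampled from a larger reasoning teacher.
# Model names and data below are placeholders, not DeepSeek's setup.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# (prompt, teacher-generated reasoning trace) pairs, assumed to have been
# sampled from the teacher model beforehand.
teacher_traces = [
    ("What is 17 * 24?",
     "<think>17*24 = 17*20 + 17*4 = 340 + 68 = 408</think> 408"),
]

student_name = "meta-llama/Llama-3.1-8B"  # placeholder student checkpoint
tok = AutoTokenizer.from_pretrained(student_name)
student = AutoModelForCausalLM.from_pretrained(student_name)
optim = torch.optim.AdamW(student.parameters(), lr=1e-5)

for prompt, trace in teacher_traces:
    text = prompt + "\n" + trace + tok.eos_token
    batch = tok(text, return_tensors="pt")
    # Standard causal-LM loss over the full prompt + trace
    # (prompt-token masking omitted for brevity).
    out = student(**batch, labels=batch["input_ids"])
    out.loss.backward()
    optim.step()
    optim.zero_grad()
```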

Input: $0.55
Output: $2.2
Latency (p50): -
Output Limit: 8K
Function Calling: -
JSON Mode: -
Input Modalities: Text
Output Modalities: Text
Provider (name not shown): input $0.55, output $2.2
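The page lists providers but no usage details, so here is a minimal sketch of querying a hosted DeepSeek-R1-Distill-Llama-70B deployment through an OpenAI-compatible endpoint. The base URL, API key, and model identifier are placeholders that depend on the provider you choose, and max_tokens simply stays under the 8K output limit listed above.

```python
# Minimal sketch of calling a hosted deployment via an OpenAI-compatible
# API. The base_url and model id are placeholders; check your provider's
# documentation for the real values.
from openai import OpenAI

client = OpenAI(
    base_url="https://your-provider.example/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",
)

resp = client.chat.completions.create(
    model="deepseek-r1-distill-llama-70b",  # placeholder model id
    messages=[{"role": "user", "content": "Prove that sqrt(2) is irrational."}],
    max_tokens=4096,  # stays within the 8K output limit listed above
)
print(resp.choices[0].message.content)
```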