command-a-03-2025 vs r1-1776
Pricing, Performance & Features Comparison
Command A is Cohere's most performant model to date, excelling at tool use, agents, retrieval-augmented generation (RAG), and multilingual use cases. It has a 256K context length, requires only two GPUs to run, and delivers 150% higher throughput than Command R+ 08-2024.
command-a-03-2025
Input: $2.50 per 1M tokens
Output: $10.00 per 1M tokens
Latency (p50): 2s
Output Limit: -
Function Calling: -
JSON Mode: -
Input Modality: -
Output Modality: -
Context Length: 128K
Reasoning: -
Providers: 1
Released: Feb 2025
Knowledge Cutoff: Oct 2023
License: -
R1 1776 is a version of the DeepSeek-R1 reasoning model that has been post-trained by Perplexity AI to remove Chinese Communist Party censorship.
r1-1776
Input: $2.00 per 1M tokens
Output: $8.00 per 1M tokens
Latency (p50): -
Output Limit: 8K
Function Calling: -
JSON Mode: -
Input Modality: Text
Output Modality: Text
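The per-token prices above make head-to-head cost estimates straightforward. A minimal Python sketch, assuming the Input/Output figures are USD per 1M tokens (the usual convention for these listings, not stated explicitly here):

```python
# Listed prices, assumed to be USD per 1M tokens.
PRICES = {
    "command-a-03-2025": {"input": 2.50, "output": 10.00},
    "r1-1776": {"input": 2.00, "output": 8.00},
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost of one request at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000

# Example: a 10K-token prompt with a 1K-token completion.
for model in PRICES:
    print(f"{model}: ${request_cost(model, 10_000, 1_000):.4f}")
```

At that prompt/completion shape, r1-1776 comes out roughly 20% cheaper per request, since both its input and output rates are lower.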