deepseek-v4-flash vs gpt-5.4-nano-2026-03-17: Pricing, Performance & Features Comparison
deepseek-v4-flash

- Context Length: 1M
- Reasoning: Yes
- Providers: 1
- Released: Apr 2026
- Knowledge Cutoff: -
- License: MIT License

Mixture-of-Experts model with 284B total parameters and 13B activated per token. Features a hybrid attention architecture for efficient 1M-token context processing.
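To make the "284B total / 13B activated" figure concrete, here is a minimal sketch of top-k expert routing, the mechanism a Mixture-of-Experts layer uses to activate only a few experts per token. The expert count, per-expert size, and shared-parameter size below are hypothetical illustration values, not the model's actual configuration:

```python
def route_token(gate_scores, k=2):
    """Pick the k experts with the highest gate scores for one token."""
    ranked = sorted(range(len(gate_scores)),
                    key=lambda i: gate_scores[i], reverse=True)
    return ranked[:k]

# Hypothetical numbers chosen so the arithmetic lands on 13B active params.
n_experts = 64
params_per_expert_b = 4.0   # billions per expert (assumed)
shared_params_b = 5.0       # always-active params: attention, embeddings (assumed)

scores = [0.1] * n_experts
scores[3], scores[41] = 0.9, 0.8      # router scores two experts highest
active = route_token(scores, k=2)

# Only the routed experts' weights are used for this token.
active_params_b = shared_params_b + len(active) * params_per_expert_b
print(active, active_params_b)  # → [3, 41] 13.0
```

The point is that compute per token scales with the activated parameters (13B here), while total capacity scales with all experts combined.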
| | deepseek-v4-flash | gpt-5.4-nano-2026-03-17 |
|---|---|---|
| Input | $0.14 | $0.20 |
| Output | $0.28 | $1.30 |
| Cache Read | $0.028 | $0.02 |
| Cache Write | $0.14 | - |
| Latency (p50) | 3.3s | 2.5s |
| Output Limit | 384K | 128K |
| Function Calling | Yes | Yes |
| JSON Mode | Yes | Yes |
| Input Modalities | Text | Text, Image |
| Output Modalities | Text | Text |
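The listed input/output rates can be turned into a per-request cost estimate. A minimal sketch, assuming the prices are per 1M tokens (the page does not state the unit) and ignoring cache read/write discounts:

```python
# (input $, output $) per 1M tokens -- unit is an assumption, taken from
# the comparison above; cache pricing is ignored for simplicity.
PRICES = {
    "deepseek-v4-flash": (0.14, 0.28),
    "gpt-5.4-nano-2026-03-17": (0.20, 1.30),
}

def request_cost(model, input_tokens, output_tokens):
    """Estimated dollar cost of one request at the listed rates."""
    p_in, p_out = PRICES[model]
    return (input_tokens * p_in + output_tokens * p_out) / 1_000_000

# Example: a long-context request with 100K input and 10K output tokens.
for model in PRICES:
    print(model, round(request_cost(model, 100_000, 10_000), 4))
```

Under these assumptions the output-price gap ($0.28 vs $1.30) dominates for generation-heavy workloads, while input-heavy workloads keep the two models closer.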