# kimi-k2.6 vs deepseek-v4-pro
Pricing, Performance & Features Comparison
**kimi-k2.6** is a Mixture-of-Experts model with 1T total parameters and 32B activated per token. It features MLA attention, a MoonViT vision encoder, and agent swarm orchestration.

**deepseek-v4-pro** is a flagship Mixture-of-Experts model with 1.6T total parameters and 49B activated per token, trained on 32T+ tokens with hybrid attention for efficient 1M-token context processing.

|                    | kimi-k2.6          | deepseek-v4-pro |
|--------------------|--------------------|-----------------|
| Released           | Apr 2026           | Apr 2026        |
| Knowledge Cutoff   | Apr 2025           | -               |
| License            | MIT License        | MIT License     |
| Providers          | 1                  | 1               |
| Context Length     | 262K               | 1M              |
| Output Limit       | 66K                | 384K            |
| Reasoning          | ✓                  | ✓               |
| Function Calling   | ✓                  | ✓               |
| JSON Mode          | ✓                  | -               |
| Input Modalities   | Text, Image, Video | Text            |
| Output Modalities  | Text               | Text            |
| Input Price        | $0.95              | $1.70           |
| Output Price       | $4.00              | $3.50           |
| Cache Read Price   | $0.16              | $0.15           |
| Cache Write Price  | $0.95              | $1.70           |
| Latency (p50)      | 5.4s               | 4.4s            |
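The listed prices can be turned into a quick per-request cost estimate. A minimal sketch, assuming the prices are USD per 1M tokens (the usual convention on comparison pages, not stated here) and that cached input tokens bill at the cache-read rate; the `estimate_cost` helper is illustrative, not part of either model's API:

```python
# Rough request-cost estimator for the two models compared above.
# Assumption: prices are USD per 1M tokens, and cached input tokens
# are billed at the cache-read rate instead of the input rate.

PRICES = {
    # model: (input, output, cache_read) in USD per 1M tokens
    "kimi-k2.6":       (0.95, 4.00, 0.16),
    "deepseek-v4-pro": (1.70, 3.50, 0.15),
}

def estimate_cost(model, input_tokens, output_tokens, cached_tokens=0):
    """Return the estimated cost in USD for one request."""
    inp, out, cache = PRICES[model]
    fresh = input_tokens - cached_tokens  # input tokens not served from cache
    return (fresh * inp + cached_tokens * cache + output_tokens * out) / 1_000_000

# Example: a 200K-token prompt, half of it cached, with a 4K-token reply.
for model in PRICES:
    cost = estimate_cost(model, 200_000, 4_000, cached_tokens=100_000)
    print(f"{model}: ${cost:.4f}")
```

Despite the higher input price, deepseek-v4-pro comes out cheaper on output-heavy workloads, while kimi-k2.6 wins on prompt-heavy ones; the crossover depends on the input/output token ratio of your traffic.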