GLM-5.1 vs DeepSeek-V4-Pro
Pricing, Performance & Features Comparison
GLM-5.1
A post-training upgrade to GLM-5: a Mixture-of-Experts model with 744B total parameters and 40B activated per token, trained on Huawei Ascend 910B chips with enhanced RL for agentic capabilities.
Input price:        $1.4
Output price:       $4.4
Cache read:         $0.26
Cache write:        $1.4
Latency (p50):      6.1s
Output limit:       131K
Context length:     1M
Input modality:     Text
Output modality:    Text
Features:           Function Calling, JSON Mode, Reasoning
Providers:          1
Released:           Apr 2026
Knowledge cutoff:   -
License:            MIT License
DeepSeek-V4-Pro
A flagship Mixture-of-Experts model with 1.6T total parameters and 49B activated per token, trained on 32T+ tokens with hybrid attention for efficient 1M-token context processing.
Input price:        $1.7
Output price:       $3.5
Cache read:         $0.15
Cache write:        $1.7
Latency (p50):      4.4s
Output limit:       384K
Input modality:     Text
Output modality:    Text
Features:           Function Calling, JSON Mode
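To make the listed rates concrete, here is a minimal sketch of how a per-request cost estimate falls out of the numbers above. It assumes the prices are USD per million tokens (the page does not state the unit) and that cached input tokens are billed at the cache-read rate instead of the input rate; the function and parameter names are illustrative, not from any provider's API.

```python
# Hypothetical cost comparison built from the rates listed above.
# Assumption: all prices are USD per 1M tokens (unit not stated on the page).

PRICES = {
    # model: (input, output, cache_read, cache_write) in $/M tokens (assumed)
    "glm-5.1":         (1.4, 4.4, 0.26, 1.4),
    "deepseek-v4-pro": (1.7, 3.5, 0.15, 1.7),
}

def request_cost(model, in_tok, out_tok, cached_tok=0, cache_write_tok=0):
    """Estimated USD cost of one request under the assumed billing scheme."""
    inp, outp, cread, cwrite = PRICES[model]
    uncached = in_tok - cached_tok          # input tokens billed at full rate
    return (uncached * inp
            + cached_tok * cread            # cache hits billed at read rate
            + cache_write_tok * cwrite      # tokens written into the cache
            + out_tok * outp) / 1_000_000

# Example: 100K input tokens (80K served from cache) and 10K output tokens.
for model in PRICES:
    print(model, round(request_cost(model, 100_000, 10_000, cached_tok=80_000), 4))
```

Under this example workload, DeepSeek-V4-Pro's cheaper output and cache-read rates outweigh its higher input rate, despite GLM-5.1's lower base input price.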