
glm-4.7-flash vs gpt-5.1-2025-11-13

Pricing, Performance & Features Comparison

glm-4.7-flash
Author: zai
Context Length: 200K
Reasoning: -
Providers: 1
Released: Jan 2026
Knowledge Cutoff: -
License: -

GLM-4.7-Flash is a 30B Mixture-of-Experts (MoE) reasoning model with approximately 3.6B active parameters, designed for local deployment and strong performance on coding, agentic workflows, and chat. It supports a 200K context window and reports open-source state-of-the-art scores on benchmarks such as SWE-bench Verified and τ²-Bench, with particular strength in frontend and backend development.

Input: $0.07
Output: $0.4
Latency (p50): 8.9s
Output Limit: 131K
Function Calling: yes
JSON Mode: -
Input Modalities: Text
Output Modalities: Text
Provider pricing: in $0.07 · out $0.4 · cache write $0.01
Latency (24h): -
Success Rate (24h): -
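To put the listed rates in concrete terms, here is a minimal sketch of a per-request cost estimate. It assumes the prices above are quoted per million tokens (the page's price-unit setting is not captured here), and the token counts are arbitrary illustrative values.

```python
# Rough per-request cost from per-million-token rates (assumed unit).
def request_cost(input_tokens: int, output_tokens: int,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """Estimated USD cost of a single request."""
    return (input_tokens * price_in_per_m
            + output_tokens * price_out_per_m) / 1_000_000

# glm-4.7-flash at the listed rates ($0.07 in, $0.4 out):
# e.g. a 10,000-token prompt producing a 2,000-token reply.
print(f"${request_cost(10_000, 2_000, 0.07, 0.4):.4f}")  # ~$0.0015
```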
gpt-5.1-2025-11-13
Author: openai
Context Length: 400K
Reasoning: -
Providers: 1
Released: Nov 2025
Knowledge Cutoff: -
License: -

The best model for coding and agentic tasks with configurable reasoning effort.
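"Configurable reasoning effort" is a request-time parameter rather than a separate model variant. Below is a minimal sketch using the OpenAI Python SDK's reasoning_effort parameter; the accepted effort values for this snapshot should be confirmed against OpenAI's documentation, and the prompt is a placeholder.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Request a lower reasoning effort to trade some depth for latency/cost.
response = client.chat.completions.create(
    model="gpt-5.1-2025-11-13",
    reasoning_effort="low",
    messages=[
        {"role": "user", "content": "Refactor this function to remove duplication: ..."},
    ],
)
print(response.choices[0].message.content)
```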

Input: $1.3
Output: $10
Latency (p50): 1.9s
Output Limit: 128K
Function Calling: yes
JSON Mode: -
Input Modalities: -
Output Modalities: -
Provider pricing: in $1.3 · out $10 · cache $0.13
Latency (24h): -
Success Rate (24h): -
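Putting the two price lists side by side: the sketch below estimates what the same hypothetical workload would cost on each model, again assuming the listed prices are per million tokens and ignoring cache pricing. The token counts are illustrative only.

```python
# Side-by-side cost estimate for one hypothetical workload,
# using the listed rates (assumed to be per 1M tokens).
PRICES = {
    "glm-4.7-flash": {"in": 0.07, "out": 0.4},
    "gpt-5.1-2025-11-13": {"in": 1.3, "out": 10.0},
}

def workload_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    p = PRICES[model]
    return (input_tokens * p["in"] + output_tokens * p["out"]) / 1_000_000

# Illustrative agentic coding session: 50k input tokens, 8k output tokens.
glm = workload_cost("glm-4.7-flash", 50_000, 8_000)
gpt = workload_cost("gpt-5.1-2025-11-13", 50_000, 8_000)
print(f"glm-4.7-flash:      ${glm:.4f}")  # ~$0.0067
print(f"gpt-5.1-2025-11-13: ${gpt:.4f}")  # ~$0.1450
print(f"price ratio: {gpt / glm:.1f}x")   # ~21.6x
```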