kimi-latest-128k vs grok-4-0709
Pricing, Performance & Features Comparison
Kimi-latest-128k refers to the Kimi K2 model, a state-of-the-art Mixture-of-Experts (MoE) language model with 32 billion activated parameters out of 1 trillion total. It offers a 128K-token context window and is optimized for agentic use: tool calling, reasoning, and autonomous problem-solving.
kimi-latest-128k
Input: $2 per 1M tokens
Output: $5 per 1M tokens
Cache Read: $0.15 per 1M tokens
Latency (p50): -
Output Limit: 128K tokens
Function Calling: Supported
JSON Mode: Supported
Input Modalities: Text, Image, Audio, Video
Output Modalities: Text, Audio
Success Rate (24h): -
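Both models list JSON Mode among their supported features. As a minimal sketch, assuming an OpenAI-compatible chat-completions API (which both providers expose) and the standard `response_format` parameter, a JSON-mode request payload might be built like this; the system-prompt wording is illustrative:

```python
# Sketch: building a JSON-mode chat request payload for an
# OpenAI-compatible endpoint. Parameter support is an assumption;
# check the provider's API reference before relying on it.
def build_json_mode_request(model: str, prompt: str) -> dict:
    """Return a chat-completions payload that requests JSON-only output."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Reply with a single JSON object."},
            {"role": "user", "content": prompt},
        ],
        # JSON mode: the server constrains the reply to valid JSON.
        "response_format": {"type": "json_object"},
    }

payload = build_json_mode_request("kimi-latest-128k", "List three primes.")
print(payload["response_format"]["type"])  # json_object
```

The same payload works for `grok-4-0709` by swapping the model name; only the endpoint base URL and API key differ between providers.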
grok-4-0709 is xAI's flagship model, offering strong performance across natural language, math, and reasoning - a general-purpose jack of all trades.
grok-4-0709
Input: $3 per 1M tokens
Output: $15 per 1M tokens
Cache Read: $0.75 per 1M tokens
Latency (p50): 26.1s
Output Limit: 256K tokens
Function Calling: Supported
JSON Mode: Supported
Input Modalities: Text
Output Modalities: Text
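The listed per-1M-token rates make per-request cost a simple calculation. As a worked example with illustrative token counts (the 50K-in / 2K-out request below is an assumption, not a benchmark):

```python
# Comparing per-request cost from the listed per-1M-token rates.
RATES = {  # USD per 1M tokens: (input, output)
    "kimi-latest-128k": (2.00, 5.00),
    "grok-4-0709": (3.00, 15.00),
}

def request_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Cost in USD for a single request at the listed rates."""
    in_rate, out_rate = RATES[model]
    return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

# A hypothetical 50K-token-in / 2K-token-out request:
for model in RATES:
    print(model, round(request_cost(model, 50_000, 2_000), 4))
# kimi-latest-128k costs $0.11, grok-4-0709 costs $0.18
```

Note that grok-4's higher output rate ($15 vs $5) dominates the gap for generation-heavy workloads, while cache-read pricing ($0.15 vs $0.75) matters most for long, repeated prompts.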