
pixtral-12b vs ministral-8b-2410

Pricing, Performance & Features Comparison

Price unit: $ per 1M tokens
Pixtral-12B
Author: mistral
Context Length: 128K
Reasoning: -
Providers: 1
Released: Sep 2024
Knowledge Cutoff: -
License: -

Pixtral-12B is a natively multimodal large language model with 12 billion parameters plus a 400-million-parameter vision encoder, trained on interleaved image and text data. It delivers strong performance on multimodal tasks, including multimodal instruction following, while maintaining state-of-the-art results on text-only benchmarks. The model supports variable image sizes and can process multiple images within its 128K-token context window.
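As a concrete illustration of that multimodal interface, the sketch below sends one text part and one image part to pixtral-12b. It assumes an OpenAI-compatible chat completions endpoint; the base URL, API key variable, image URL, and exact model id are placeholders, not values taken from this page.

```python
# Minimal sketch: text + image input to pixtral-12b via an
# OpenAI-compatible chat completions API. Endpoint, key, model id,
# and image URL are placeholder assumptions.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder gateway URL
    api_key=os.environ["LLM_API_KEY"],      # placeholder env var
)

response = client.chat.completions.create(
    model="pixtral-12b",  # exact id varies by provider
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this chart in two sentences."},
                # Additional image parts can be interleaved with text,
                # up to the model's 128K-token context window.
                {
                    "type": "image_url",
                    "image_url": {"url": "https://example.com/chart.png"},
                },
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Because images are consumed as tokens in the same 128K window as text, several images can share a single request at the per-token prices listed below.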

Input Price: $0.15 / 1M tokens
Output Price: $0.15 / 1M tokens
Latency (p50): 710 ms
Output Limit: 128K
Function Calling
JSON Mode
Input Modalities: Text, Image
Output Modalities: Text

Ministral-8B-2410
Author: mistral
Context Length: 128K
Reasoning: -
Providers: 1
Released: Oct 2024
Knowledge Cutoff: Oct 2023
License: -

Ministral-8B-Instruct-2410 is an instruction-tuned language model built on Mistral's 8B-parameter dense transformer architecture. It supports context windows of up to 128K tokens and is particularly strong in multilingual applications, code-related tasks, and chat-based interactions. Its design targets efficient on-device and edge computing deployments while retaining strong performance.
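The JSON Mode entry in the feature list below pairs naturally with these chat and code use cases. The sketch that follows requests structured output from ministral-8b-2410, again assuming an OpenAI-compatible endpoint; the base URL, API key variable, and model id are placeholders, and JSON-mode availability depends on the provider.

```python
# Minimal sketch: JSON-mode chat completion with ministral-8b-2410 via an
# OpenAI-compatible API. Endpoint, key, and model id are placeholder
# assumptions; JSON-mode support depends on the provider.
import json
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.example.com/v1",  # placeholder gateway URL
    api_key=os.environ["LLM_API_KEY"],      # placeholder env var
)

response = client.chat.completions.create(
    model="ministral-8b-2410",  # exact id varies by provider
    # JSON mode constrains the model to emit a syntactically valid JSON object.
    response_format={"type": "json_object"},
    messages=[
        {
            "role": "system",
            "content": 'Reply with a JSON object of the form {"language": str, "summary": str}.',
        },
        {"role": "user", "content": "Summarize: 'Les Ministraux' target on-device use."},
    ],
)
print(json.loads(response.choices[0].message.content))
```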

Input Price: $0.10 / 1M tokens
Output Price: $0.10 / 1M tokens
Latency (p50): 676 ms
Output Limit: 128K
Function Calling
JSON Mode
Input Modalities: Text
Output Modalities: Text