devstral-medium-2507 vs llama-3.1-8b-instruct

Pricing, Performance & Features Comparison

Price unit: per 1M tokens
Devstral Medium 2507
Author: Mistral
Context Length: 128K
Reasoning: -
Providers: 1
Released: Jul 2025
Knowledge Cutoff: -
License: -

Devstral Medium 2507 is a high-performance, code-centric large language model designed for agentic coding capabilities and enterprise use. It features a 128k token context window and achieves a 61.6% score on SWE-Bench Verified, outperforming several commercial models like Gemini 2.5 Pro and GPT-4.1. The model excels at code generation, multi-file editing, and powering software engineering agents with structured outputs and tool integration.

Input: $0.40 / 1M tokens
Output: $2.00 / 1M tokens
Latency (p50): 760ms
Output Limit: 128K
Function Calling: Yes
JSON Mode: Yes
Input Modality: Text
Output Modality: Text

Provider pricing (1 provider, per 1M tokens):
Provider 1: in $0.40 / out $2.00; Latency (24h): -; Success Rate (24h): -
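As a rough illustration of the tool integration and structured outputs mentioned in the description above, here is a minimal sketch of calling devstral-medium-2507 through an OpenAI-compatible chat-completions client. The base URL, API-key environment variable, and the read_file tool schema are assumptions for illustration, not details from this listing.

```python
# A minimal sketch of an agentic tool-calling request, assuming an OpenAI-compatible
# chat-completions endpoint. The base URL, API-key variable, and the read_file tool
# are illustrative assumptions, not details taken from this page.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.mistral.ai/v1",    # assumed OpenAI-compatible endpoint
    api_key=os.environ["MISTRAL_API_KEY"],   # hypothetical environment variable
)

# A hypothetical tool a coding agent might expose for multi-file editing tasks.
tools = [{
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Return the contents of a file in the working repository.",
        "parameters": {
            "type": "object",
            "properties": {"path": {"type": "string"}},
            "required": ["path"],
        },
    },
}]

response = client.chat.completions.create(
    model="devstral-medium-2507",
    messages=[{"role": "user", "content": "Locate where the config loader is defined."}],
    tools=tools,
)

# If the model chooses to call the tool, the structured call appears here.
print(response.choices[0].message.tool_calls)
```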
Llama 3.1 8B Instruct
Author: Meta
Context Length: 128K
Reasoning: -
Providers: 2
Released: Jul 2024
Knowledge Cutoff: Dec 2023
License: -

Llama 3.1-8B-Instruct is an auto-regressive language model optimized for multilingual dialogue and instruction-following tasks. It employs supervised fine-tuning and reinforcement learning with human feedback to align with human preferences. The model supports a 128k token context and is suitable for generating text and code in multiple languages.
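As a quick illustration of the dialogue and instruction-following use case described above, the sketch below sends a single multilingual request to Llama 3.1 8B Instruct through an OpenAI-compatible provider client. The endpoint URL and provider-side model ID are assumptions for illustration.

```python
# A minimal sketch of a multilingual instruction-following request, assuming an
# OpenAI-compatible provider endpoint. The base URL and provider-side model ID
# are assumptions for illustration.
import os

from openai import OpenAI

client = OpenAI(
    base_url="https://api.deepinfra.com/v1/openai",  # assumed deepinfra endpoint
    api_key=os.environ["DEEPINFRA_API_KEY"],         # hypothetical environment variable
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",   # assumed provider-side model ID
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Translate into French: 'The build failed because a dependency was missing.'"},
    ],
    max_tokens=128,
)

print(response.choices[0].message.content)
```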

Input: $0.02 / 1M tokens
Output: $0.05 / 1M tokens
Latency (p50): -
Output Limit: 4K
Function Calling: Yes
JSON Mode: -
Input Modality: Text
Output Modality: Text

Provider pricing (2 providers, per 1M tokens):
deepinfra (cheapest): in $0.02 / out $0.05; Latency (24h): -; Success Rate (24h): -
Second provider: in $0.10 / out $0.10; Latency (24h): -; Success Rate (24h): -
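To make the per-1M-token prices above concrete, this short sketch computes the cost of one hypothetical workload at each model's listed rates; the token counts are illustrative, not taken from this page.

```python
# Worked cost comparison using the per-1M-token prices listed above.
# The workload size (2M input tokens, 0.5M output tokens) is an illustrative assumption.
PRICES_PER_1M = {
    "devstral-medium-2507": {"input": 0.40, "output": 2.00},
    "llama-3.1-8b-instruct": {"input": 0.02, "output": 0.05},  # cheapest listed provider (deepinfra)
}

input_tokens = 2_000_000
output_tokens = 500_000

for model, price in PRICES_PER_1M.items():
    cost = (input_tokens / 1e6) * price["input"] + (output_tokens / 1e6) * price["output"]
    print(f"{model}: ${cost:.3f}")
# Roughly $1.80 for devstral-medium-2507 vs $0.065 for llama-3.1-8b-instruct.
```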