llama-3.2-1b-instruct vs llama-3.2-11b-vision-instruct
Pricing, Performance & Features Comparison
Llama 3.2-1B-Instruct is a 1-billion-parameter multilingual large language model optimized for dialogue, retrieval, and summarization tasks. It is instruction-tuned with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF). The model accepts and produces multilingual text and code.
Input Price: $0.01
Output Price: $0.02
Latency (p50): -
Output Limit: 4K
Function Calling: -
JSON Mode: -
Input Type: Text
Output Type: Text
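Since the spec sheet above lists text-only input and a 4K output limit, a minimal sketch of how such a model is typically invoked may help. This builds an OpenAI-style chat-completions request payload; the endpoint URL, the exact model id string, and `response_format` support for JSON mode are assumptions based on common provider conventions, not confirmed API details.

```python
import json

# Hypothetical OpenAI-compatible endpoint; substitute your provider's URL.
API_URL = "https://example-provider.com/v1/chat/completions"

def build_request(prompt: str, json_mode: bool = False) -> dict:
    """Build a chat-completions payload for the 1B instruct model."""
    payload = {
        "model": "meta-llama/llama-3.2-1b-instruct",  # assumed model id
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 4096,  # matches the 4K output limit in the spec above
    }
    if json_mode:
        # JSON mode (if the provider supports it): constrain output to valid JSON.
        payload["response_format"] = {"type": "json_object"}
    return payload

req = build_request("Summarize this paragraph in one sentence.", json_mode=True)
print(json.dumps(req, indent=2))
```

The payload would then be POSTed to the provider's chat-completions endpoint with an API key; only the `model` string changes when switching between the two models compared here.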
Llama 3.2-11B-Vision-Instruct (meta-llama/llama-3.2-11b-vision-instruct) is optimized for visual recognition, image reasoning, captioning, and question answering about images. It extends the Llama 3.1 text base with a vision adapter and cross-attention layers, and uses fine-tuning for alignment with human preferences. The model supports multiple languages for text-only tasks but English only for image-plus-text applications.
Input Price: $0.00
Output Price: $0.00
Latency (p50): -
Output Limit: 4K
Function Calling: -
JSON Mode: -
Input Type: Text, Image
Output Type: Text
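The vision model's spec differs from the 1B model's in one way: it accepts image input alongside text. A minimal sketch of a multimodal request follows; the `image_url` content-part schema is an assumption based on common chat-completions conventions, and per the description above, image-plus-text prompts should be in English.

```python
import json

def build_vision_request(question: str, image_url: str) -> dict:
    """Build a chat-completions payload pairing a text question with an image."""
    return {
        "model": "meta-llama/llama-3.2-11b-vision-instruct",  # assumed model id
        "messages": [{
            "role": "user",
            # Multimodal content: a list of typed parts rather than a plain string.
            "content": [
                {"type": "text", "text": question},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }],
        "max_tokens": 4096,  # matches the 4K output limit in the spec above
    }

req = build_vision_request("What is shown in this image?",
                           "https://example.com/photo.jpg")
print(json.dumps(req, indent=2))
```

Text-only requests to the vision model use the same plain-string `content` shown for the 1B model; the typed-parts list is needed only when an image is attached.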