Pricing, Performance & Features Comparison
Llama 3.2-1B-Instruct is a multilingual large language model optimized for dialogue, retrieval, and summarization. It is instruction-tuned with supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), and it accepts and generates multilingual text and code.
The 'meta-llama/llama-3.2-11b-vision-instruct' model is optimized for visual recognition, image reasoning, captioning, and question answering about images. It builds on the Llama 3.1 text-only base by adding a vision adapter with cross-attention layers, and it is fine-tuned for alignment with human preferences. The model supports multiple languages for text-only tasks, while image-plus-text applications are supported in English only.
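To make the image-plus-text capability concrete, here is a minimal sketch of how a request to the vision model might be structured. It assumes an OpenAI-compatible chat-completions schema (the common convention for multimodal requests); the exact endpoint, message format, and the `build_vision_request` helper are illustrative assumptions, not a confirmed API. Only the model ID comes from the text above.

```python
import json

def build_vision_request(image_url: str, question: str) -> dict:
    """Sketch of an OpenAI-style multimodal chat payload (assumed schema).

    The model ID matches the one discussed above; the rest of the
    structure is a hypothetical example, not a documented contract.
    """
    return {
        "model": "meta-llama/llama-3.2-11b-vision-instruct",
        "messages": [
            {
                "role": "user",
                # A single user turn combining a text question with an image
                # reference, per the common multimodal message convention.
                "content": [
                    {"type": "text", "text": question},
                    {"type": "image_url", "image_url": {"url": image_url}},
                ],
            }
        ],
    }

payload = build_vision_request(
    "https://example.com/photo.jpg", "What is shown in this image?"
)
print(json.dumps(payload, indent=2))
```

A payload like this would then be POSTed to whatever chat-completions endpoint hosts the model; for the text-only 1B model, the `content` field would simply be a plain string instead of a list of parts.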