Llama 3.2: Revolutionizing edge AI and vision with open, customizable models
8 min read · Sep 26, 2024
By Sahaj Godhani, AI Engineer
Takeaways:
- Today, we’re releasing Llama 3.2, which includes small and medium-sized vision LLMs (11B and 90B), and lightweight, text-only models (1B and 3B) that fit onto edge and mobile devices, including pre-trained and instruction-tuned versions.
- The Llama 3.2 1B and 3B models support a context length of 128K tokens. They are state-of-the-art in their class for on-device use cases like summarization, instruction following, and rewriting tasks running locally at the edge. These models are enabled on day one for Qualcomm and MediaTek hardware and optimized for Arm processors.
- Supported by a broad ecosystem, the Llama 3.2 11B and 90B vision models are drop-in replacements for their corresponding text models, while outperforming closed models such as Claude 3 Haiku on image-understanding tasks. Unlike other open multimodal models, both pre-trained and aligned models can be fine-tuned for custom applications using torchtune and deployed locally using torchchat. They’re also available to try using our smart assistant, Meta AI.
- We’re sharing the first official Llama Stack distributions, which will greatly simplify the way developers work with Llama models…
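For the lightweight instruction-tuned 1B and 3B models running locally, prompts follow the Llama 3 chat format with its special header and end-of-turn tokens. A minimal sketch of assembling such a prompt, assuming the publicly documented Llama 3 special tokens; the helper name is illustrative, not part of any Llama API:

```python
def build_llama3_prompt(system: str, user: str) -> str:
    """Assemble a single-turn prompt in the Llama 3 instruct chat format.

    Token strings (<|begin_of_text|>, <|start_header_id|>, <|end_header_id|>,
    <|eot_id|>) are the documented Llama 3 special tokens; in practice a
    tokenizer's chat template would normally do this for you.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        # The open assistant header cues the model to generate its reply.
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama3_prompt(
    "You are a concise on-device assistant.",
    "Summarize this note in one sentence.",
)
print(prompt)
```

Frameworks such as Hugging Face transformers expose this same layout through the model's chat template, so hand-building the string is only needed when driving the model at the raw-text level.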