Meta releases Llama 4 Maverick & Scout on Hugging Face Hub
AI Impact Summary
Meta has released Llama 4 Maverick and Scout, two Mixture-of-Experts (MoE) large language models, on the Hugging Face Hub. Both models are natively multimodal and use 17B active parameters per token; Scout supports context lengths of up to 10 million tokens, a scale reached with techniques such as interleaved NoPE attention layers and inference-time attention temperature scaling. For deployment, Maverick is available in BF16 and FP8, while Scout can be run with on-the-fly int4 quantization, enabling efficient use on diverse hardware.
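To make the MoE idea above concrete: in this family of models, each MoE layer routes every token to a single best routed expert (plus a shared expert that processes all tokens). The snippet below is a toy NumPy sketch of that top-1 routing pattern, not Meta's implementation; all weights, dimensions, and the `moe_layer` helper are made up for illustration.

```python
import numpy as np

def moe_layer(x, router_w, shared_w, expert_ws):
    """Toy top-1 MoE layer: each token goes through the shared expert,
    plus exactly one routed expert chosen by the router."""
    logits = x @ router_w                       # (tokens, n_experts)
    top1 = logits.argmax(axis=-1)               # chosen expert per token
    # softmax over router logits; keep only the chosen expert's gate value
    probs = np.exp(logits - logits.max(-1, keepdims=True))
    probs /= probs.sum(-1, keepdims=True)
    gate = probs[np.arange(len(x)), top1][:, None]
    out = x @ shared_w                          # shared expert sees all tokens
    for e, w in enumerate(expert_ws):           # routed experts see a subset
        mask = top1 == e
        out[mask] += gate[mask] * (x[mask] @ w)
    return out

rng = np.random.default_rng(0)
d, n_experts, tokens = 8, 4, 5
x = rng.standard_normal((tokens, d))
y = moe_layer(x,
              rng.standard_normal((d, n_experts)),
              rng.standard_normal((d, d)),
              [rng.standard_normal((d, d)) for _ in range(n_experts)])
print(y.shape)  # (5, 8)
```

The key property this illustrates is why 17B "active" parameters differs from total parameters: every token's forward pass touches only the shared expert and one routed expert, so compute per token stays fixed even as the number of experts grows.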
Affected Systems
- Date: not specified
- Change type: capability
- Severity: medium