Llama 4 Scout
Meta's efficient multimodal model with industry-leading 10M token context
Meta • April 2025
Training Data
Up to August 2024
Technical Specifications
Parameters
109 billion (17B active)
Architecture
Mixture of Experts (16 experts)
Context Window
10,000,000 tokens
Knowledge Cutoff
August 2024
Key Features
Open Source • 10M Token Context • 16 Experts MoE
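The 17B-active / 109B-total parameter split and the "16 Experts MoE" tag above both come from mixture-of-experts routing: each token is sent to only a small subset of expert feed-forward blocks, so most parameters sit idle on any given forward pass. The PyTorch layer below is a toy sketch of top-k expert routing for illustration only; the dimensions, class name, and routing details are assumptions, not Llama 4 Scout's actual implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer: a learned router activates only k of
    n_experts feed-forward blocks per token (illustrative, not Llama 4's code)."""

    def __init__(self, d_model: int, d_ff: int, n_experts: int = 16, k: int = 1):
        super().__init__()
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList([
            nn.Sequential(nn.Linear(d_model, d_ff), nn.GELU(), nn.Linear(d_ff, d_model))
            for _ in range(n_experts)
        ])
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (tokens, d_model). Score every expert, keep only the top-k per token.
        scores, idx = self.router(x).topk(self.k, dim=-1)        # both (tokens, k)
        weights = F.softmax(scores, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e                          # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot, None] * expert(x[mask])
        return out

# Demo: 16 experts with top-1 routing, so only 1/16 of the expert parameters run per token.
moe = TopKMoE(d_model=64, d_ff=256, n_experts=16, k=1)
print(moe(torch.randn(8, 64)).shape)   # torch.Size([8, 64])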
Capabilities
Context: Outstanding
Efficiency: Outstanding
Multimodal: Very Good
What's New in This Version
Industry-leading 10M token context window, with the model able to run on a single H100-class GPU when quantized
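Because the weights are open, Scout is typically served behind an OpenAI-compatible endpoint, either locally or through a hosting provider, and the long context is exercised with an ordinary chat-completions call. The snippet below is a minimal sketch under that assumption; the base URL, API key, file name, and model identifier are placeholders that depend on where the model is served.

from openai import OpenAI

# Placeholder endpoint and credentials: substitute your provider's values,
# or point at a local server exposing an OpenAI-compatible API.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed-locally")

# A document far larger than older 128K-context models can hold in one prompt.
with open("long_report.txt") as f:
    document = f.read()

response = client.chat.completions.create(
    model="llama-4-scout",   # assumed model id; check your provider's catalog
    messages=[
        {"role": "system", "content": "Summarize the user's document in five bullet points."},
        {"role": "user", "content": document},
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)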
Other Meta Models
Explore more models from Meta
Llama 4 Behemoth
Meta's flagship multimodal model (research preview only; weights not publicly released)
Llama 4 Maverick
Meta's balanced multimodal MoE model with 128 experts for general use
Llama 3.3 70B
Meta's most recent Llama 3 generation model, a 70B text model with improved instruction following