Mixtral 8x22B
Mistral AI • April 2024
Mistral's largest mixture-of-experts model
What's New in This Version
Larger experts and an extended 64,000-token context window
Technical Specifications
Parameters: 8x22B (MoE)
Context Window: 64,000 tokens
Training Method: Mixture of Experts
Knowledge Cutoff: February 2024
Training Data: Up to early 2024
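The "8x22B" figure describes a sparse mixture-of-experts architecture: each transformer layer holds eight expert feed-forward networks, and a learned router sends every token to a small subset of them (two, in Mixtral's published design), so only a fraction of the total parameters are active per token. The PyTorch sketch below illustrates the idea; the dimensions, expert structure, and top_k value are illustrative toy choices, not Mixtral's actual configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoEFeedForward(nn.Module):
    """Toy sparse mixture-of-experts FFN: a router picks top-k experts per token."""

    def __init__(self, d_model=512, d_ff=2048, n_experts=8, top_k=2):
        super().__init__()
        self.top_k = top_k
        self.router = nn.Linear(d_model, n_experts, bias=False)
        self.experts = nn.ModuleList(
            [nn.Sequential(nn.Linear(d_model, d_ff), nn.SiLU(), nn.Linear(d_ff, d_model))
             for _ in range(n_experts)]
        )

    def forward(self, x):                       # x: (tokens, d_model)
        logits = self.router(x)                 # (tokens, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)
        weights = F.softmax(weights, dim=-1)    # normalize over the chosen experts only
        out = torch.zeros_like(x)
        for k in range(self.top_k):             # send each token to its k-th chosen expert
            for e, expert in enumerate(self.experts):
                mask = idx[:, k] == e
                if mask.any():
                    out[mask] += weights[mask, k].unsqueeze(-1) * expert(x[mask])
        return out

# Each token only pays for its top_k experts, which is why a sparse "8x22B"
# model is far cheaper to run per token than a dense model of the same total size.
x = torch.randn(4, 512)
print(MoEFeedForward()(x).shape)  # torch.Size([4, 512])
```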
Key Features
Large MoE • Extended Context • High Performance
Capabilities
Reasoning: Very Good
Coding: Very Good
Complex Tasks: Good
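As a usage sketch, the model can be queried over Mistral's chat completions API. The endpoint path and the `open-mixtral-8x22b` model identifier below are assumptions based on Mistral's public platform around the model's release; verify both against current documentation before relying on them.

```python
import os
import requests

# Hedged sketch: endpoint URL and model ID are assumptions, not confirmed
# by this page; check Mistral's current API docs.
API_URL = "https://api.mistral.ai/v1/chat/completions"

resp = requests.post(
    API_URL,
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    json={
        "model": "open-mixtral-8x22b",
        "messages": [
            {"role": "user", "content": "Summarize mixture-of-experts in one sentence."}
        ],
        "max_tokens": 128,
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])
```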
Other Mistral AI Models
Explore more models from Mistral AI
Mistral Large 3
Mistral's state-of-the-art open-weight frontier model with multimodal and multilingual capabilities under Apache 2.0
December 2025 • 675 billion parameters (41B active)
Ministral 3 14B
Mistral's high-performance dense model in the new Ministral 3 family
December 2025 • 14 billion parameters
Ministral 3 8B
Mistral's efficient edge-ready model for drones, cars, robots, phones and laptops
December 2025 • 8 billion parameters