Mixtral 8x7B
Mistral AI • Released December 2023
Mistral's efficient mixture-of-experts model
What's New in This Version
Efficient sparse activation with strong performance
Technical Specifications
Parameters: 8x7B (MoE)
Context Window: 32,000 tokens
Training Method: Mixture of Experts
Knowledge Cutoff: September 2023
Training Data: Up to early 2023
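The "Mixture of Experts" training method above means each transformer layer holds several feed-forward "experts" but activates only a few per token, which is where the model's efficiency comes from. A minimal NumPy sketch of top-2 expert routing, the general mechanism behind this sparse activation; the dimensions and weights here are toy illustrative values, not Mixtral's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8   # Mixtral-style layer: 8 expert feed-forward blocks
TOP_K = 2       # only 2 experts are activated per token
D_MODEL = 16    # toy hidden size for illustration only

# Toy experts: each is a simple random linear map in this sketch.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS))

def moe_layer(x):
    """Route a token vector to its top-2 experts and mix their
    outputs, weighted by a softmax over the selected router scores."""
    logits = x @ router_w               # one router score per expert
    top = np.argsort(logits)[-TOP_K:]   # indices of the top-2 experts
    weights = np.exp(logits[top])
    weights /= weights.sum()            # softmax over the 2 chosen experts
    # Only TOP_K of N_EXPERTS experts run, so per-token compute
    # scales with 2 experts even though 8 sets of weights exist.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_layer(token)
```

This is why the "8x7B" naming understates the efficiency: all experts' parameters must be held in memory, but each token only pays the compute cost of two of them.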
Key Features
Mixture of Experts • High Efficiency • Open Weights
Capabilities
Reasoning: Good
Coding: Very Good
Efficiency: Excellent
Other Mistral AI Models
Explore more models from Mistral AI
Mistral Large 3
Mistral's state-of-the-art open-weight frontier model with multimodal and multilingual capabilities under Apache 2.0
December 2025 • 675 billion parameters (41B active)
Ministral 3 14B
Mistral's high-performance dense model in the new Ministral 3 family
December 2025 • 14 billion parameters
Ministral 3 8B
Mistral's efficient edge-ready model for drones, cars, robots, phones and laptops
December 2025 • 8 billion parameters