Mixtral 8x7B

Mistral AI · Released December 2023

Mistral's efficient mixture-of-experts model

What's New in This Version

Efficient sparse activation with strong performance: each transformer layer contains 8 expert feed-forward networks, and a router sends every token through only 2 of them, so just a fraction of the model's total parameters is active per token (see the sketch below).
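
As a rough illustration (not Mixtral's actual implementation), here is a minimal PyTorch sketch of top-2-of-8 expert routing; the hidden size and the experts, router, and moe_forward names are toy assumptions, and the real experts are gated feed-forward blocks inside each transformer layer:

```python
import torch
import torch.nn.functional as F

NUM_EXPERTS, TOP_K, HIDDEN = 8, 2, 16   # Mixtral routes each token to 2 of 8 experts

# Toy stand-ins: the real experts are gated feed-forward blocks, not single Linears.
experts = [torch.nn.Linear(HIDDEN, HIDDEN) for _ in range(NUM_EXPERTS)]
router = torch.nn.Linear(HIDDEN, NUM_EXPERTS)

def moe_forward(x):                                    # x: (tokens, HIDDEN)
    logits = router(x)                                 # router scores, (tokens, 8)
    weights, idx = torch.topk(logits, TOP_K, dim=-1)   # keep the 2 best experts per token
    weights = F.softmax(weights, dim=-1)               # renormalize over the chosen 2
    out = torch.zeros_like(x)
    for k in range(TOP_K):
        for e in range(NUM_EXPERTS):
            mask = idx[:, k] == e                      # tokens whose k-th choice is expert e
            if mask.any():                             # only chosen experts run: sparse activation
                out[mask] += weights[mask, k:k + 1] * experts[e](x[mask])
    return out

print(moe_forward(torch.randn(4, HIDDEN)).shape)       # torch.Size([4, 16])
```

Because only 2 of the 8 experts run for any given token, each forward pass touches only a fraction of the total weights, which is where the efficiency gain comes from.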

Technical Specifications

Parameters: 8x7B (MoE)
Context Window: 32,000 tokens
Training Method: Mixture of Experts
Knowledge Cutoff: September 2023
Training Data: Up to early 2023
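
Since the weights are open, the model can be loaded with standard tooling. Below is a minimal sketch using Hugging Face transformers; the repository id mistralai/Mixtral-8x7B-Instruct-v0.1, the [INST] prompt format, and the assumption of enough GPU memory for automatic sharding reflect common usage, not specifications from this page:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mistralai/Mixtral-8x7B-Instruct-v0.1"      # assumed Hugging Face repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",     # shard across available GPUs (requires accelerate)
    torch_dtype="auto",    # use the checkpoint's native precision
)

prompt = "[INST] Explain mixture-of-experts in one sentence. [/INST]"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```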

Key Features

Mixture of Experts • High Efficiency • Open Weights

Capabilities

Reasoning: Good
Coding: Very Good
Efficiency: Excellent
