Qwen3-30B-A3B
Qwen • April 2025
An efficient Mixture-of-Experts (MoE) model with 30B total parameters, of which only 3B are active per token.
What's New in This Version
Excellent performance-to-cost ratio with sparse activation
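The sparse activation mentioned above means that a router sends each token to only a few of the model's experts, so only a fraction of the total weights are used per forward pass. A minimal sketch of top-k MoE routing is below; the expert count, top-k value, and layer sizes are illustrative placeholders, not Qwen3-30B-A3B's real configuration.

```python
# Minimal top-k MoE routing sketch (hypothetical sizes, not Qwen's config).
import numpy as np

rng = np.random.default_rng(0)

N_EXPERTS = 8   # hypothetical number of experts
TOP_K = 2       # experts activated per token
D_MODEL = 16    # hypothetical hidden size

# Each "expert" is a single weight matrix here, standing in for a FFN block.
experts = [rng.standard_normal((D_MODEL, D_MODEL)) for _ in range(N_EXPERTS)]
router_w = rng.standard_normal((D_MODEL, N_EXPERTS))

def moe_forward(x):
    """Route one token vector to its top-k experts and mix their outputs."""
    logits = x @ router_w                # router scores, shape (N_EXPERTS,)
    top = np.argsort(logits)[-TOP_K:]    # indices of the k highest-scoring experts
    weights = np.exp(logits[top])
    weights /= weights.sum()             # softmax over the selected experts only
    # Only TOP_K of the N_EXPERTS matrices are touched: sparse activation.
    return sum(w * (x @ experts[i]) for w, i in zip(weights, top))

token = rng.standard_normal(D_MODEL)
out = moe_forward(token)

# Per-token cost counts the router plus k experts, not all experts --
# the same reason a 30B-total model can run with only ~3B active parameters.
total = router_w.size + sum(e.size for e in experts)
active = router_w.size + TOP_K * experts[0].size
print(f"total={total}, active per token={active}")
```

Scaling the same idea up is what gives the 30B-total / 3B-active split: total parameters grow with the number of experts, while per-token compute grows only with top-k.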
Technical Specifications
Parameters: 30B (3B active)
Context Window: 128,000 tokens
Training Method: Mixture of Experts
Knowledge Cutoff: March 2025
Training Data: Up to early 2025
Key Features
MoE Architecture • Efficient Inference • Cost Effective • Fast Response
Capabilities
Efficiency: Outstanding
Speed: Excellent
Reasoning: Very Good
Other Qwen Models
Explore more models from Qwen
Qwen3-Max
Alibaba's flagship model with over 1 trillion parameters and exceptional reasoning
September 2025 • 1T+ parameters
QwQ-32B
Reasoning-focused model with extended thinking capabilities
March 2025 • 32B parameters
Qwen2.5-72B-Instruct
Alibaba's instruction-tuned flagship open-source model
September 2024 • 72B parameters