Qwen3-235B-A22B
Alibaba's flagship open-source MoE model with 235B total parameters and 22B active
Qwen • April 2025
Training Data: Up to early 2025
Parameters: 235B total (22B active)
Architecture: Mixture of Experts (MoE)
Context Window: 128,000 tokens
Knowledge Cutoff: March 2025
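The "Mixture of Experts" architecture is why only 22B of the 235B parameters are active per token: a small gating network picks a few expert sub-networks for each token, and only those run. A minimal sketch of top-k expert routing (dimensions, expert count, and k here are illustrative placeholders, not Qwen3's actual configuration):

```python
import math
import random

def topk_route(gate_logits, k):
    """Pick the k experts with the largest gate logits; softmax-renormalize over just those k."""
    idx = sorted(range(len(gate_logits)), key=lambda i: gate_logits[i], reverse=True)[:k]
    m = max(gate_logits[i] for i in idx)
    w = [math.exp(gate_logits[i] - m) for i in idx]
    s = sum(w)
    return idx, [wi / s for wi in w]

def moe_layer(x, experts, gate, k=2):
    """Route token vector x through only the k selected experts; blend by gate weight."""
    logits = [sum(g * xi for g, xi in zip(gv, x)) for gv in gate]
    idx, weights = topk_route(logits, k)
    out = [0.0] * len(x)
    for i, w in zip(idx, weights):
        y = experts[i](x)  # experts not in idx never execute
        out = [o + w * yi for o, yi in zip(out, y)]
    return out

random.seed(0)
d, n_experts = 8, 16

def make_expert():
    # A tiny linear map standing in for a full feed-forward expert block.
    W = [[random.gauss(0, 1) for _ in range(d)] for _ in range(d)]
    return lambda x, W=W: [sum(wij * xj for wij, xj in zip(row, x)) for row in W]

experts = [make_expert() for _ in range(n_experts)]
gate = [[random.gauss(0, 1) for _ in range(d)] for _ in range(n_experts)]
x = [random.gauss(0, 1) for _ in range(d)]
y = moe_layer(x, experts, gate, k=2)  # only 2 of 16 experts ran for this token
```

With k=2 of 16 experts firing, the compute per token scales with the active experts only, even though all expert weights must still be stored.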
Key Features
Open Source • MoE Architecture • Efficient Inference • Strong Reasoning
Capabilities
Reasoning: Outstanding
Coding: Excellent
Multilingual: Outstanding
What's New in This Version
Flagship open-source MoE rivaling much larger proprietary models with efficient 22B active parameters
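The split between 235B total and 22B active parameters separates memory cost from compute cost. A hypothetical back-of-envelope sketch (assumes bf16 weights at 2 bytes per parameter and the common ~2 FLOPs per active parameter per token estimate; real deployments often use quantization, offloading, or tensor parallelism):

```python
TOTAL_PARAMS = 235e9    # all experts must be resident in memory
ACTIVE_PARAMS = 22e9    # only these participate per token
BYTES_PER_PARAM = 2     # bf16 (assumption)

weight_memory_gb = TOTAL_PARAMS * BYTES_PER_PARAM / 1e9  # ~470 GB of weights
flops_per_token = 2 * ACTIVE_PARAMS                      # ~44 GFLOPs per token

print(f"weights: ~{weight_memory_gb:.0f} GB, per-token compute: ~{flops_per_token / 1e9:.0f} GFLOPs")
```

So inference compute resembles a 22B dense model, while storage requirements resemble a 235B one.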
Other Qwen Models
Explore more models from Qwen
Qwen3.6-Plus
Alibaba's flagship agentic AI model with hybrid linear attention, always-on reasoning, and autonomous multi-step coding workflows
Qwen3.5-Plus
Alibaba's hosted flagship combining hybrid linear-attention MoE with native multimodal understanding for agentic workflows across 201 languages
Qwen3-Max
Alibaba's flagship model with over 1 trillion parameters and exceptional reasoning