GLM-4.6V
Open-source vision-language model optimized for multimodal reasoning and frontend automation
Z.ai • December 2025
Training Data
Up to November 2025
Parameters
106 billion
Training Method
Vision-Language Pre-training
Context Window
128,000 tokens
Knowledge Cutoff
November 2025
Key Features
Vision-Language • Frontend Automation • Tool Calling • Open Source
Capabilities
Vision: Excellent
Multimodal: Outstanding
Tool Use: Excellent
What's New in This Version
Native tool-calling vision model for production multimodal applications
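As a rough illustration of how native tool calling combines with vision input, the sketch below builds an OpenAI-compatible chat-completions request pairing a screenshot with a callable frontend-automation tool. The model id `glm-4.6v`, the `click_element` tool, and the exact request shape are assumptions for illustration, not details confirmed by this card.

```python
import json

def build_request(image_url: str, question: str) -> dict:
    """Build a hypothetical chat request with one image input and one callable tool."""
    return {
        "model": "glm-4.6v",  # assumed model identifier
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "image_url", "image_url": {"url": image_url}},
                    {"type": "text", "text": question},
                ],
            }
        ],
        "tools": [
            {
                "type": "function",
                "function": {
                    # hypothetical frontend-automation tool the model could call
                    "name": "click_element",
                    "description": "Click a UI element identified in a screenshot.",
                    "parameters": {
                        "type": "object",
                        "properties": {"selector": {"type": "string"}},
                        "required": ["selector"],
                    },
                },
            }
        ],
    }

request = build_request("https://example.com/screenshot.png", "Click the login button.")
print(json.dumps(request, indent=2))
```

In this style of API, the model responds either with text or with a `tool_calls` entry naming `click_element` and its arguments, which the caller then executes.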
Other Z.ai Models
Explore more models from Z.ai
GLM-4.7
Z.ai's flagship model with industry-leading coding and multi-step task handling
GLM-4.5
Z.ai's general-purpose flagship trained on 22 trillion tokens
GLM-4.5V
Vision-language model compatible with Huawei Ascend processors