GLM-4.7
Z.ai's flagship model with industry-leading coding and multi-step task handling
Z.ai • December 2025
Technical Specifications
Training Data: Up to late 2025
Parameters: ~400 billion
Training Method: Advanced Pre-training
Context Window: 200,000 tokens
Knowledge Cutoff: November 2025
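The context window and 128K maximum output noted on this page imply a simple prompt budget. A minimal sketch, assuming input and output tokens share one window (typical, but not confirmed by this page) and that "128K" means 128 × 1024 = 131,072 tokens; the helper function is illustrative, not part of any Z.ai SDK:

```python
# Prompt-budget sketch for GLM-4.7's published limits.
# Assumptions: input and output share one context window,
# and "128K" output means 128 * 1024 = 131,072 tokens.
CONTEXT_WINDOW = 200_000   # tokens, from the spec above
MAX_OUTPUT = 128 * 1024    # 131,072 tokens, assumed interpretation of "128K"

def max_prompt_tokens(reserved_output: int = MAX_OUTPUT,
                      window: int = CONTEXT_WINDOW) -> int:
    """Tokens left for the prompt after reserving space for the reply."""
    if reserved_output > window:
        raise ValueError("reserved output exceeds the context window")
    return window - reserved_output

print(max_prompt_tokens())        # reserve the full 128K output -> 68928
print(max_prompt_tokens(4_096))   # a typical short-reply reservation -> 195904
```

Reserving the full output budget leaves roughly 69K tokens of input, so long-document prompts usually pair with a smaller output reservation.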
Key Features
Outstanding Coding • Long Output • Multi-step Tasks • Tool Use
Capabilities
Coding: Outstanding (73.8% on SWE-bench)
Math: Outstanding (95.7% on AIME)
Reasoning: Excellent
What's New in This Version
Highest SWE-bench score among open-source models; 128K-token maximum output
Other Z.ai Models
Explore more models from Z.ai
GLM-5.1
Zhipu AI's current flagship, refined from GLM-5 for agentic engineering; it tops the SWE-Bench Pro leaderboard and sustains autonomous execution runs of up to 8 hours
GLM-5
Zhipu AI's open-weight 744B MoE foundation model purpose-built for agentic engineering, trained entirely on Huawei Ascend chips
GLM-4.6V
Open-source vision-language model optimized for multimodal reasoning and frontend automation