BEIJING, Feb. 13 – China’s Zhipu AI released GLM-5, an open-source model the company says is built for long-running agent tasks and stronger coding performance, as competition intensifies in frontier AI. Reuters reported the launch on Wednesday.
According to the project’s model card, GLM-5 has 744 billion total parameters, with 40 billion active per token, and was trained on 28.5 trillion tokens. It uses DeepSeek Sparse Attention to reduce inference costs while preserving long-context capability. Hugging Face hosts the weights and technical details.
Benchmarks and positioning
Zhipu says GLM-5’s coding results are comparable to Anthropic’s Claude Opus 4.5 and surpass Google’s Gemini 3 Pro on some tests, according to its release cited by Reuters.
The benchmark table highlighted by The Decoder shows a 77.8% score on SWE-bench Verified. The Decoder also notes that the model weights are published under an MIT license on Hugging Face for broad developer access.
Chips and self-sufficiency
Zhipu said GLM-5 runs inference on domestically made chips, including Huawei’s Ascend line, reflecting China’s push to build advanced AI stacks without U.S. hardware. Reuters reported those details from the company’s statement.
Coverage by WinBuzzer adds that the model was trained on Huawei chips and powers the Z.ai chat platform, underscoring the company’s emphasis on domestic compute and rapid developer adoption.
Competition and adoption
The release lands amid a burst of new models from Chinese rivals ahead of the Lunar New Year, including recent launches from MiniMax and ByteDance, Reuters noted.
StartupNews.fyi said the open-source strategy positions GLM-5 to compete with Gemini and Claude while expanding adoption among enterprises that prefer downloadable weights over proprietary APIs, framing the move as part of a broader global shift toward open weights.
Even so, The Decoder cautioned that benchmark gains do not always translate directly into real-world performance, though the release narrows the gap with leading Western models and raises expectations for open-weight alternatives.
How we report: We select the day’s most important stories, confirm facts across multiple reputable sources, and avoid anonymous sourcing. Our goal is clear, balanced coverage you can trust, because transparency and verification matter for informed readers.
Image Attribution
Server room in CERN (Switzerland) – Florian Hirzinger, CC BY-SA 3.0 (https://creativecommons.org/licenses/by-sa/3.0), via Wikimedia Commons (https://commons.wikimedia.org/wiki/File:CERN_Server_03.jpg). Cropped to 16:9 and resized.