
Why Z.ai’s GLM-4.7 Matters More Than Most Open-Source AI Releases


Z.ai, a Beijing-based AI company, has made a major move in open-source artificial intelligence.

On December 22, the company released GLM-4.7, an open-weights model with 358 billion parameters that immediately took first place among open models on several key benchmarks.

The release reinforces China’s growing ability to develop cutting-edge AI and make it widely available.

GLM-4.7 gives developers and researchers best-in-class performance in coding, logical reasoning, and autonomous agent tasks without confining them to closed platforms.

Also Read: MiniMax M2.1 Shows Why Open-Source AI Is Getting Dangerous for Closed Coding Models

Shattering Open-Source Benchmarks


According to the release notes, GLM-4.7 sets a new state of the art among open-source models on most, if not all, key performance measures.

  • Coding Capability: It scored 73.8% on SWE-bench Verified, a benchmark of real-world software engineering problem solving.
  • Reasoning: It achieved 85.7% on GPQA-Diamond, a benchmark of graduate-level science questions.
  • Agentic Capabilities: It earned 42.8% on HLE (Humanity’s Last Exam) when given access to external tools.

In web development, GLM-4.7 is also the top-rated open-source model on LMArena’s WebDev leaderboard, with a score of 1449.

Speed, Scale, and Accessibility

Beyond raw benchmark scores, GLM-4.7 is engineered for practical, long-context work. It offers a 128K-token context window, which means it can process massive documents or entire codebases in a single prompt.
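
To make the long-context claim concrete, here is a minimal sketch of sending an entire source file as one prompt through an OpenAI-compatible chat-completions client. The base URL, model identifier, and ZAI_API_KEY environment variable are illustrative assumptions, not details confirmed in the release.

```python
# Minimal sketch: sending a large file as a single prompt to a long-context model.
# Assumptions (not confirmed by the article): an OpenAI-compatible endpoint,
# the base URL, the model name "glm-4.7", and the ZAI_API_KEY environment variable.
import os
from openai import OpenAI

client = OpenAI(
    api_key=os.environ["ZAI_API_KEY"],          # hypothetical env var
    base_url="https://api.z.ai/api/paas/v4",    # assumed OpenAI-compatible base URL
)

with open("large_codebase_dump.py", "r", encoding="utf-8") as f:
    source = f.read()  # a 128K-token window leaves room for very large files

response = client.chat.completions.create(
    model="glm-4.7",  # assumed model identifier
    messages=[
        {"role": "system", "content": "You are a careful code reviewer."},
        {"role": "user", "content": f"Review this code and list any bugs:\n\n{source}"},
    ],
)
print(response.choices[0].message.content)
```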

Early users are already praising its speed: in one demonstration, the model wrote a complete, playable Space Invaders game with sound effects while generating roughly 16 tokens per second on an Apple M3 Ultra Mac.

Z.ai has released the model through multiple channels: the pretrained weights can be downloaded from Hugging Face, and a hosted API is available. The company also launched a competitively priced coding plan at $3 per month.
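
For readers who prefer the downloadable weights over the hosted API, loading them would look roughly like the standard Hugging Face transformers flow sketched below. The repository id zai-org/GLM-4.7 is an assumption based on Z.ai’s existing Hugging Face organization, and a 358-billion-parameter model realistically requires a multi-GPU server or an inference engine such as vLLM rather than a single workstation.

```python
# Minimal sketch of loading open weights from Hugging Face with transformers.
# Assumption: the repo id "zai-org/GLM-4.7" is illustrative; check the actual listing.
# Note: a 358B-parameter model needs substantial multi-GPU hardware; this is a sketch only.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "zai-org/GLM-4.7"  # assumed repository name

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",   # use the dtype stored in the checkpoint
    device_map="auto",    # shard across whatever GPUs are available
)

messages = [{"role": "user", "content": "Write a function that reverses a linked list."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```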