What is GLM-130B?
GLM-130B is a 130-billion-parameter open bilingual (English and Chinese) large language model developed by Tsinghua's KEG lab and released in 2022 as one of the earliest openly accessible dense LLMs at the 100B+ scale.
Is the Hugging Face demo for GLM-130B working?
No. The Hugging Face Space at zai-org/GLM-130B currently shows a runtime error because its storage limit was exceeded; the original demo was hosted elsewhere, but the model weights remain downloadable from the GitHub and Hugging Face repos.
How can I run GLM-130B today?
You can download the model from the official GitHub repo (zai-org/GLM-130B) and run inference on a multi-GPU setup (e.g., 8× A100 without quantization, or 4× RTX 3090 with INT4 quantization) using the provided code.
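The official repo drives inference through its own scripts rather than the Hugging Face transformers AutoModel API, so the snippet below is only a generic sketch of the 4-bit, multi-GPU loading pattern described above; the checkpoint id is a placeholder you would replace with whatever transformers-compatible GLM checkpoint you actually have.

```python
# Generic sketch of INT4, multi-GPU causal-LM inference with Hugging Face
# transformers + bitsandbytes. The model id below is hypothetical, not an
# official GLM-130B checkpoint; the official repo ships its own inference scripts.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "your-org/your-glm-checkpoint"  # placeholder checkpoint id

quant_config = BitsAndBytesConfig(
    load_in_4bit=True,                     # 4-bit weight quantization (roughly quarters memory vs FP16)
    bnb_4bit_compute_dtype=torch.float16,  # run matmuls in FP16
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",                     # shard layers across all visible GPUs
    trust_remote_code=True,
)

inputs = tokenizer("GLM-130B is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```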
Is GLM-130B still relevant in 2026?
While groundbreaking in 2022 for its efficiency and openness, it has since been surpassed by newer models such as the GLM-4 series and Llama 3; today it is mainly of historical and research interest, e.g., for bilingual or low-resource large-model studies.







