GLM-130B

Open Bilingual 130B-Parameter LLM for English and Chinese Tasks
Founder: Jie Tang
Tool Release Date: Oct 2022
Tool Users: 100K+
Pricing Model: Free
Starting Price: $0/Month

About This AI

GLM-130B is an open-source bidirectional dense language model with 130 billion parameters, pre-trained using the General Language Model (GLM) algorithm.

Developed by Tsinghua University's Knowledge Engineering Group (KEG) and collaborators, it excels at bilingual (English and Chinese) understanding and generation and performs strongly on downstream NLP tasks. Thanks to low-bit quantization, it supports efficient inference on comparatively modest hardware: a single server with 8x A100 GPUs in full precision, or 4x RTX 3090 GPUs when quantized to INT4.
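
As a rough sanity check on those hardware figures, the weight memory alone can be estimated from the parameter count and the numeric precision. The sketch below is plain Python arithmetic written for this page (it is not code from the GLM-130B repository) and it ignores activations, KV cache, and framework overhead.

    # Approximate weight-only memory footprint of a 130B-parameter model
    # at different precisions (illustrative estimate; overheads ignored).
    params = 130e9

    for name, bytes_per_param in [("FP16", 2), ("INT8", 1), ("INT4", 0.5)]:
        gib = params * bytes_per_param / 1024**3
        print(f"{name}: ~{gib:.0f} GiB of weights")

    # FP16 ~ 242 GiB -> fits across a single 8x A100 40 GB server (320 GB total)
    # INT4 ~  61 GiB -> fits across 4x RTX 3090 24 GB cards (96 GB total)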

Pricing

Pricing Model: Free
Starting Price: $0/Month

Key Features

  1. 130-billion-parameter bidirectional dense model
  2. Bilingual pre-training on 400B+ tokens (English & Chinese)
  3. Efficient inference on a single 8x A100 server, or on 4x RTX 3090 GPUs with quantization
  4. Supports both blank filling and left-to-right generation (see the sketch after this list)
  5. Strong zero-shot and few-shot performance on diverse NLP tasks
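
To make the blank-filling interface concrete, here is a minimal sketch that uses the smaller GLM-10B checkpoint on Hugging Face as a stand-in (GLM-130B itself is normally run through the scripts in its own repository). The helper names used here, such as build_inputs_for_generation and eop_token_id, follow the published GLM model card and should be verified against the current checkpoint; a trailing [gMASK] token would be used instead of [MASK] for open-ended left-to-right generation.

    # Minimal blank-filling sketch with a GLM-family checkpoint.
    # Assumption: THUDM/glm-10b as a small stand-in; GLM-130B itself is served
    # via the scripts in its GitHub repository rather than this API.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
    model = AutoModelForSeq2SeqLM.from_pretrained("THUDM/glm-10b", trust_remote_code=True)
    model = model.half().cuda().eval()

    # [MASK] marks a short blank to fill in; [gMASK] at the end of a prompt
    # would instead request long, left-to-right generation.
    prompt = "GLM-130B is an open bilingual language model developed by [MASK]."
    inputs = tokenizer(prompt, return_tensors="pt")
    inputs = tokenizer.build_inputs_for_generation(inputs, max_gen_length=64)
    inputs = inputs.to("cuda")

    outputs = model.generate(**inputs, max_length=256,
                             eos_token_id=tokenizer.eop_token_id)
    print(tokenizer.decode(outputs[0].tolist()))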

Pros

  1. Fully open source, with weights and code available
  2. Exceptional hardware efficiency for a 100B+ parameter model
  3. Balanced, strong performance in both English and Chinese
  4. Little to no performance degradation with INT4/INT8 quantization
  5. Pioneering, accessible large-scale bilingual LLM

Cons

  1. Hosted demo space currently inactive due to exceeded storage limits
  2. Requires significant GPU resources even when quantized
  3. Older model compared to modern LLMs such as Llama 3 or GLM-4
  4. Limited to text-based tasks, with no multimodal support
  5. Performance may lag behind newer models on the latest benchmarks

GLM-130B remains a landmark open-source release for bilingual large language models. It is best suited to researchers and developers interested in efficient inference of 100B-scale models, or in the history of Chinese LLM development.

FAQs

  • What is GLM-130B?

    GLM-130B is a 130-billion-parameter open bilingual (English & Chinese) large language model developed by Tsinghua's KEG lab, released in 2022 as one of the earliest openly accessible 100B+ scale dense LLMs.

  • Is the Hugging Face demo for GLM-130B working?

    No. The space at zai-org/GLM-130B currently shows a runtime error because its storage limits were exceeded; the original demo was hosted elsewhere, but the model weights remain downloadable from the GitHub and Hugging Face repositories.

  • How can I run GLM-130B today?

    You can download the model from the official GitHub repository (zai-org/GLM-130B) and run inference on multi-GPU setups (e.g., 8x A100 in full precision, or 4x RTX 3090 with INT4 quantization) using the provided code.

  • Is GLM-130B still relevant in 2026?

    While groundbreaking in 2022 for its efficiency and openness, it has since been surpassed by newer models such as the GLM-4 series and Llama 3; it is now mainly of historical and research interest for bilingual or resource-constrained large-model studies.

GLM-130B Alternatives

GlobalGPT

GravityWrite

Undetectable AI

Storynest AI
