Grok 5 Could Jump to 7 Trillion Parameters After Jensen Huang’s CES 2026 Hint

NVIDIA CEO Jensen Huang has given one of the clearest indications yet of xAI’s next frontier model, sharing at CES 2026 that Grok 5 is targeting a massive 7 trillion parameters.

The figure is more than double Grok 4’s estimated 3 trillion parameters, making this one of the most aggressive scaling jumps in frontier AI development.

Huang made the comment during his CES 2026 keynote, calling Grok 5 “the next frontier model” and saying that the Vera Rubin GPU architecture NVIDIA is building is designed to handle training runs of this scale.

With a fixed one-month training window, Rubin delivers roughly 100x the factory throughput per watt of the previous Hopper systems, a crucial gain for power-constrained data centers whose build-outs run into the billions of dollars.
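For a sense of scale, here is a rough back-of-envelope sketch (not a figure from the keynote) of the sustained compute such a run would demand. It assumes a dense 7-trillion-parameter model, the common C ≈ 6·N·D rule of thumb for training FLOPs, a Chinchilla-style ~20 tokens per parameter, and 40% hardware utilization; every one of those numbers is an illustrative assumption.

```python
# Back-of-envelope estimate of sustained compute for a hypothetical
# 7T-parameter training run. Every number below is an assumption for
# illustration, not a figure from NVIDIA or xAI.

N = 7e12                   # assumed dense parameter count (7 trillion)
tokens_per_param = 20      # Chinchilla-style heuristic (assumption)
D = N * tokens_per_param   # training tokens under that heuristic

total_flops = 6 * N * D    # common C ~= 6*N*D rule of thumb for dense models

seconds = 30 * 24 * 3600   # the fixed one-month training window
mfu = 0.40                 # assumed model FLOPs utilization (40%)

sustained = total_flops / (seconds * mfu)

print(f"Training FLOPs: {total_flops:.2e}")        # ~5.88e27
print(f"Sustained rate: {sustained:.2e} FLOP/s")   # ~5.7e21 FLOP/s
```

Even under these generous assumptions, the run lands in the thousands-of-exaFLOPs-per-second regime, which is exactly the class of workload the Rubin generation is pitched at.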

Why Parameters Matter in the AI Arms Race

Parameter count is one of the primary predictors of model performance, particularly for reasoning, context handling, and multimodal abilities.

Here’s how the scaling of Grok compares to recent landmarks:

  • Grok 1 → 314B parameters (2023 baseline)
  • Grok 3/4 → ~3 trillion parameters (edge of what’s possible now)
  • Grok 5 → 7 trillion parameters (targeted)
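To ground those counts, here is a quick illustrative calculation of what merely storing the weights implies at common numeric precisions. It assumes dense models; a mixture-of-experts design, which several frontier labs are widely reported to use, would change the memory picture considerably.

```python
# Rough weight-storage footprint for the parameter counts listed above.
# Assumes dense models; sparse/MoE architectures store and activate
# parameters differently, so treat these as illustrative upper bounds.

BYTES_PER_PARAM = {"fp16/bf16": 2, "fp8": 1, "int4": 0.5}

models = {
    "Grok 1":   314e9,   # 314B (2023 baseline)
    "Grok 3/4": 3e12,    # ~3T (estimate)
    "Grok 5":   7e12,    # 7T (reported target)
}

for name, params in models.items():
    sizes = ", ".join(
        f"{fmt}: {params * b / 1e12:,.1f} TB"
        for fmt, b in BYTES_PER_PARAM.items()
    )
    print(f"{name:8s} -> {sizes}")

# Grok 5 at bf16 is ~14 TB of weights alone, far beyond any single
# GPU's memory, which is why both training and serving such a model
# require large multi-node clusters.
```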

This leap would make Grok 5 potentially the largest publicly discussed model yet.

At that scale, it would surpass estimates for competitors such as GPT-4 (roughly 1.7 trillion parameters) and even some speculated next-generation rollouts from OpenAI and Anthropic.

The implications are profound:

  • More consistent, coherent reasoning across complex tasks
  • Expanded multimodal abilities (text + vision + potentially longer video/context)
  • Higher intelligence density and stronger real-world understanding

That’s no surprise, given the rapid build-out of xAI’s Colossus superclusters and the huge GPU deployments being expanded in Memphis and beyond.