
OpenAI Secret Image Models Leaked: Chestnut and Hazelnut Spotted in the Wild


While the AI world was busy debating GPT-5.2's reasoning scores, I noticed a massive leak surfacing from the visual side of OpenAI. Independent testers on the popular benchmarking platforms Design Arena and LM Arena have spotted two mysterious new models, codenamed Chestnut and Hazelnut.

Early tests suggest these are the successors to DALL·E 3, and they represent a fundamental shift in OpenAI's philosophy: the “safety rails” seem to be coming off, and “world knowledge” is getting plugged in.

I analyzed the leaked outputs and cross-referenced them with community reports. Here is what I found about the models likely to launch alongside GPT-5.2.

The Research-First Image Generator

Chestnut

The most groundbreaking feature isn’t just better pixels; it is intelligence. Unlike previous models that just dreamed up an image based on your prompt, Chestnut and Hazelnut appear to research facts before generating.

  • The World Knowledge Test: I saw examples where the model was asked to generate specific technical items, like a PlayStation controller or a whiteboard with code. The models didn’t just guess. They rendered accurate, readable JSON code directly onto the image textures.
  • Why this matters: This implies to me that the model isn’t just an image generator. It likely has access to the same reasoning or search backend as GPT-5.2, allowing it to verify data, like correct syntax or historical facts, before drawing it.

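To make that “verify before drawing” idea concrete, here is a minimal sketch of the kind of check such a pipeline could run. This is my own illustration, not anything from the leak: it simply validates a JSON snippet before embedding it in an image prompt, so only syntactically correct JSON ever reaches the renderer.

```python
import json

def build_whiteboard_prompt(snippet: str) -> str:
    """Validate a JSON snippet, then embed it in an image prompt.

    Mimics the 'verify before drawing' behavior described above:
    json.loads raises ValueError on invalid JSON, so gibberish
    never makes it into the prompt text.
    """
    parsed = json.loads(snippet)          # fails loudly on bad syntax
    pretty = json.dumps(parsed, indent=2) # normalize formatting for legibility
    return f"A whiteboard photo showing this exact JSON, legibly written:\n{pretty}"

# Hypothetical example payload; the model name is the leaked codename.
snippet = '{"model": "chestnut", "task": "render JSON on a whiteboard"}'
prompt = build_whiteboard_prompt(snippet)
print(prompt)
```

Whether the real models do anything like this internally is unknown; the point is that accurate on-image JSON implies some syntax-aware step between prompt and pixels.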

The End of the Celebrity Ban?

For years, I have known OpenAI to be the strictest lab regarding public figures, flatly refusing to generate images of real people. These leaks suggest a major policy reversal.

Testers shared generated images of celebrities like Jack Black and Paul Rudd that were startlingly photorealistic.

  • Visual Fidelity: To my eyes, the plastic AI look is mostly gone. Skin textures, lighting, and eye reflections are approaching indistinguishable levels of realism.
  • The Tell: While convincing, I still spotted subtle artifacts, like slightly odd proportions on teeth or dead eyes, that flag the images as AI to experts. But to the casual observer? They are passable.

Competitor Check: Chasing the Banana

Hazelnut

The timing of this leak is no accident. Google recently launched Nano Banana Pro, which has dominated the leaderboards with its ability to render text and understand complex layouts.

  • Google Nano Banana Pro: Currently the industry leader for world knowledge in images, like correct maps and diagrams.
  • OpenAI Chestnut and Hazelnut: This appears to be a direct response, matching Google’s capability to render text (like the “Advancing AI for humanity” whiteboard example) and handle complex object interactions.

Feature Breakdown: What is New?

| Feature | Old (DALL·E 3 / GPT Image 1) | New (Chestnut / Hazelnut leak) |
| --- | --- | --- |
| Text rendering | Often gibberish or misspelled | High accuracy (renders code, JSON, logos) |
| Celebrities | Hard refusal (policy block) | Allowed (high photorealism) |
| Reasoning | Purely visual generation | Research-backed (verifies facts/data) |
| Texture quality | Smooth, digital-art look | Hyper-realistic (imperfections, grain) |
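The “research backed” row is the one that changes the architecture, not just the output. The sketch below is purely hypothetical, a two-stage pipeline where a reasoning/search step runs before generation; the function names are my own stand-ins, not confirmed OpenAI APIs:

```python
# Hypothetical sketch of a research-then-generate pipeline.
# None of these function names are confirmed OpenAI APIs; they
# only illustrate the two-stage shape implied by the leak.

def research_facts(prompt: str) -> dict:
    """Stub: stand-in for a reasoning/search backend that verifies
    facts (syntax, layouts, dates) before anything is drawn."""
    return {"verified_prompt": prompt, "facts_checked": True}

def generate_image(context: dict) -> str:
    """Stub: stand-in for the image generator, which receives
    verified context instead of the raw user prompt."""
    return f"<image rendered from: {context['verified_prompt']}>"

result = generate_image(research_facts("PS5 controller, accurate button layout"))
print(result)
```

Older models collapsed both stages into one pass over the prompt, which is why they guessed at button layouts and misspelled text.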

What is Next?

Rumors suggest these models will drop officially with the broader GPT-5.2 rollout. If the Hazelnut model is indeed the Pro version, creators are about to get a tool that finally bridges the gap between making art and rendering data.

The ability to write code inside an image might sound niche, but for developers needing UI mockups or educators needing technical diagrams, I think it is a killer feature. I will be watching Design Arena closely for the official reveal.