TwinFlow

Self-Adversarial Flows for One-Step and Few-Step High-Quality Image Generation on Large Models
Last Updated: January 7, 2026
By Zelili AI

About This AI

TwinFlow is an open-source research framework, accepted to ICLR 2026, that enables high-fidelity one-step and few-step image generation on large-scale models without external discriminators or frozen teachers.

It uses a self-adversarial mechanism with twin trajectories in an extended time domain (t in [-1, 1]): the negative-time branch generates fake data that rectifies the flow field through velocity matching.

This simple yet effective approach achieves distribution matching internally, transforming models into efficient few-step generators.
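To make the twin-trajectory idea concrete, here is a toy 1-D sketch of velocity matching over the extended time domain. Everything in it (the linear path, the constant one-step generator, the variable names) is an illustrative assumption for exposition, not TwinFlow's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D setup: noise samples x0 and "real" data samples x1.
x0 = rng.standard_normal(512)
x1 = 2.0 + 0.5 * rng.standard_normal(512)

def path_velocity(a, b):
    # For a linear path x_t = (1 - t) * a + t * b, the velocity dx/dt = b - a.
    return b - a

# Extended time domain t in [-1, 1]: t >= 0 is the real branch;
# t < 0 is the twin branch toward model-generated (fake) samples.
t = rng.uniform(-1.0, 1.0, 512)
neg = t < 0

# Fake samples from a placeholder one-step generator g(x0) = x0 + v_hat,
# where v_hat is the model's current (here: constant) velocity estimate.
v_hat = np.full_like(x0, 1.5)
x_fake = x0 + v_hat

# Velocity matching on the negative branch: push the fake trajectory's
# velocity toward the real trajectory's velocity.
v_real = path_velocity(x0, x1)
v_fake = path_velocity(x0, x_fake)
loss = float(np.mean((v_fake[neg] - v_real[neg]) ** 2))
```

Minimizing this loss with respect to the generator is what drives the internal distribution matching: when fake and real velocities agree everywhere, the model's one-step samples follow the real flow.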

Key models include TwinFlow-Qwen-Image-v1.0 and experimental Z-Image-Turbo for faster inference.

It supports scalable full-parameter training (e.g., on 20B-parameter Qwen-Image) and inference in 1–4 NFEs with strong quality and diversity.

Compatible with Hugging Face Diffusers, it demonstrates superior performance on GenEval (0.83 at 1-NFE) and matches multi-step baselines at reduced compute.

The repo provides inference scripts, sampler configs, training code for SD3.5/OpenUni, and tutorials (e.g., MNIST).

Community integrations exist for ComfyUI workflows.

Built by LINs Lab and InclusionAI contributors, it’s licensed Apache-2.0 and focuses on simplifying generative pipelines for large models.

Key Features

  1. One-step and few-step generation: High-quality images in 1–4 NFEs without auxiliary networks
  2. Self-adversarial training: Internal twin trajectories for fake data generation and flow rectification
  3. Extended time domain: t in [-1, 1] to enable negative branch for adversarial signals
  4. Velocity matching loss: Minimizes difference between real and fake trajectory velocities
  5. Large-scale scalability: Full-parameter training on models like 20B Qwen-Image
  6. Diffusers compatibility: Easy inference with Hugging Face Diffusers library
  7. Custom sampler support: Configurable stochastic/extrapolation ratios and time controls
  8. Training code release: Available for SD3.5 and OpenUni under src directory
  9. ComfyUI integrations: Community workflows for visual node-based usage
  10. Tutorials and demos: MNIST example and inference scripts provided
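The custom sampler support in feature 7 can be pictured as a small config. The key names below echo the options the repo exposes (sampling_steps, stochast_ratio), but the exact schema is an assumption; consult the repo's sampler configs for the real format:

```python
# Illustrative few-step sampler configuration (assumed schema).
sampler_config = {
    "sampling_steps": 4,         # NFE budget: 1 for one-step, 2-4 for few-step
    "stochast_ratio": 0.8,       # share of stochastic vs. deterministic updates
    "extrapolation_ratio": 0.5,  # how aggressively each step extrapolates the flow
}

def timesteps(cfg):
    """Evenly spaced timesteps in [0, 1) for the configured NFE budget."""
    n = cfg["sampling_steps"]
    return [i / n for i in range(n)]

print(timesteps(sampler_config))  # [0.0, 0.25, 0.5, 0.75]
```

Dropping sampling_steps to 1 collapses the schedule to a single timestep, which is the one-NFE setting the benchmarks report.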

Price Plans

  1. Free ($0): Fully open-source under Apache-2.0; download models/code and run locally at no cost

Pros

  1. Pipeline simplicity: No external discriminators or teachers required
  2. High efficiency: 1-NFE generation with quality rivaling multi-step methods
  3. Strong benchmarks: GenEval 0.83 at 1-NFE, matches 100-NFE baselines at low cost
  4. Open-source and scalable: Apache-2.0 license with training/inference code
  5. Community support: ComfyUI nodes and WeChat groups for discussion
  6. Research impact: Accepted to ICLR 2026, innovative self-adversarial approach

Cons

  1. Experimental nature: Research-focused; may require tuning for production use
  2. High compute for training: Full-parameter large-model training needs significant GPUs
  3. Limited pre-trained models: Few released checkpoints (TwinFlow-Qwen-Image and Z-Image-Turbo variants)
  4. Setup complexity: Requires Diffusers git install and custom sampler configs
  5. No hosted UI: Local/run-your-own only; no web demo or cloud service
  6. Early-stage repo: Ongoing updates, no formal releases yet

Use Cases

  1. Fast image generation research: Experiment with one/few-step diffusion alternatives
  2. Large-model distillation: Apply self-adversarial flows to scale efficient generators
  3. ComfyUI workflows: Integrate via community nodes for node-based creative pipelines
  4. High-diversity sampling: Achieve quality in minimal NFEs for resource-constrained inference
  5. Text-to-image acceleration: Speed up models like Qwen-Image or SD3.5 variants

Target Audience

  1. AI researchers: Studying few-step generation and flow matching
  2. Generative model developers: Building efficient large-scale diffusion models
  3. ComfyUI users: Seeking faster inference nodes for image gen
  4. Open-source contributors: Experimenting with self-adversarial techniques
  5. Academic labs: Reproducing ICLR 2026 results or extending the framework

How To Use

  1. Clone repo: git clone https://github.com/inclusionAI/TwinFlow
  2. Install dependencies: pip install git+https://github.com/huggingface/diffusers
  3. Download models: Get TwinFlow-Qwen-Image or Z-Image-Turbo from Hugging Face
  4. Run inference: python inference.py (configure sampler for 2-4 steps)
  5. Customize sampler: Edit sampling_steps, stochast_ratio, etc. in config
  6. Train own: Use src code for SD3.5/OpenUni (requires heavy compute)
  7. ComfyUI: Install community nodes like ComfyUI-TwinFlow for GUI workflow
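Steps 3–5 can be sketched in a few lines of Diffusers code. `DiffusionPipeline.from_pretrained` and `num_inference_steps` are standard Diffusers API, but the checkpoint placeholder and the helper function below are assumptions for illustration; follow the repo's model card for the exact names:

```python
def few_step_kwargs(prompt, steps=4):
    """Build pipeline call kwargs, enforcing TwinFlow's 1-4 NFE range."""
    if not 1 <= steps <= 4:
        raise ValueError("TwinFlow targets 1-4 NFEs")
    return {"prompt": prompt, "num_inference_steps": steps}

def run_demo():
    # Requires a GPU and a downloaded checkpoint; not executed here.
    import torch
    from diffusers import DiffusionPipeline

    pipe = DiffusionPipeline.from_pretrained(
        "TwinFlow-Qwen-Image-v1.0",  # placeholder ID -- use the real HF repo path
        torch_dtype=torch.bfloat16,
    ).to("cuda")
    image = pipe(**few_step_kwargs("a koi pond at dawn", steps=4)).images[0]
    image.save("twinflow_sample.png")
```

The helper simply guards the step count; the actual sampler behavior (stochastic/extrapolation ratios) is set through the repo's config files as described in step 5.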

How we rated TwinFlow

  • Performance: 4.7/5
  • Accuracy: 4.6/5
  • Features: 4.5/5
  • Cost-Efficiency: 5.0/5
  • Ease of Use: 4.0/5
  • Customization: 4.8/5
  • Data Privacy: 5.0/5
  • Support: 4.2/5
  • Integration: 4.4/5
  • Overall Score: 4.6/5

TwinFlow integration with other tools

  1. Hugging Face Diffusers: Native compatibility for inference and model loading
  2. ComfyUI Custom Nodes: Community implementations like ComfyUI-TwinFlow for node-based workflows
  3. Stable Diffusion Ecosystem: Works with SD3.5 and OpenUni-based pipelines
  4. Python Scripts: Direct inference.py and training code for custom setups
  5. Research Frameworks: Built on RCGM and UCGM repos for flow-matching extensions

Best prompts optimised for TwinFlow

  1. A cyberpunk cityscape at night with neon lights reflecting on wet streets, flying cars, detailed architecture, cinematic lighting, high resolution, 8k
  2. A serene Japanese garden in spring with cherry blossoms falling, koi pond, traditional pagoda, soft sunlight filtering through trees, photorealistic, ultra detailed
  3. Epic fantasy warrior princess riding a dragon over misty mountains at dawn, dramatic clouds, golden hour lighting, dynamic composition, masterpiece quality
  4. Minimalist product shot of a sleek modern smartphone on black background with subtle reflections, studio lighting, high detail, commercial photography style
  5. Surreal dreamscape of floating islands with waterfalls cascading into the sky, vibrant colors, magical atmosphere, intricate details, artstation trending

TwinFlow is an innovative open-source breakthrough for efficient one/few-step image generation, achieving high quality at 1–4 NFEs on large models without extra networks. Ideal for researchers pushing diffusion boundaries, though setup and compute needs suit advanced users. Strong potential with ComfyUI support and ICLR 2026 backing.

FAQs

  • What is TwinFlow?

    TwinFlow is an open-source framework (ICLR 2026) for training large-scale few-step image generators using self-adversarial flows, enabling 1-step high-quality generation without external discriminators.

  • When was TwinFlow released?

    The repo and paper appeared around December 2025, with acceptance to ICLR 2026; experimental models such as Z-Image-Turbo were released shortly after.

  • Is TwinFlow free to use?

    Yes, fully open-source under Apache-2.0 license with code, inference scripts, and models available on GitHub and Hugging Face at no cost.

  • How many steps does TwinFlow need for generation?

    It achieves high-quality results in a single NFE (one step), with 2–4 steps recommended for optimal diversity and fidelity.

  • What models does TwinFlow support?

    Key releases include TwinFlow-Qwen-Image-v1.0 and Z-Image-Turbo; compatible with SD3.5, OpenUni, and Qwen-Image bases.

  • Does TwinFlow work with ComfyUI?

    Yes, community custom nodes like ComfyUI-TwinFlow and others provide integration for node-based workflows.

  • Who created TwinFlow?

    Developed by researchers from LINs Lab and InclusionAI (Zhenglin Cheng, Peng Sun, Jianguo Li, Tao Lin).

  • What are TwinFlow’s main advantages?

    It simplifies the pipeline (no teachers or discriminators), scales to large models, and achieves one-step generation with strong benchmarks such as GenEval 0.83 at 1-NFE.


About Author

Hi Guys! We are a group of ML engineers by profession with years of experience exploring and building AI tools, LLMs, and generative technologies. We analyze new tools not just as users, but as people who understand their technical depth and real-world value. We know how overwhelming these tools can be for most people; that's why we break down complex AI concepts into simple, practical insights. Our goal is to help you discover the AI tools that actually save you time and make everyday work smarter, not harder. "We don't just write about AI: we build, test, and simplify it for you."