What is V-RGBX?
V-RGBX is the first end-to-end framework for intrinsic-aware video editing. It decomposes videos into intrinsic channels (albedo, normals, materials, and irradiance) and enables precise keyframe-based edits while preserving temporal consistency.
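As a rough mental model of that decomposition (the channel names come from the description above; the container and tensor shapes below are illustrative assumptions, not the repository's actual types):

```python
from dataclasses import dataclass
import torch

@dataclass
class IntrinsicChannels:
    """Hypothetical container for the per-frame intrinsic maps V-RGBX
    decomposes a video into. Shapes assume T frames at H x W resolution;
    the real code may organize these differently."""
    albedo: torch.Tensor      # (T, 3, H, W) base color with lighting removed
    normals: torch.Tensor     # (T, 3, H, W) per-pixel surface orientation
    materials: torch.Tensor   # (T, C, H, W) e.g. roughness/metallic maps
    irradiance: torch.Tensor  # (T, 3, H, W) incoming light per pixel
```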
When was V-RGBX released?
The paper was published December 12, 2025; model weights and inference code were released on January 15, 2026.
Is V-RGBX free and open-source?
Yes. It is fully open-source, with model weights and code available on GitHub (Aleafy/V-RGBX) under a permissive license; there are no usage fees.
What are the main capabilities of V-RGBX?
It supports video inverse rendering, photorealistic synthesis, and keyframe editing conditioned on intrinsic channels, enabling object appearance changes and scene relighting.
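A minimal workflow sketch of how those pieces fit together, assuming a decompose/edit/re-synthesize loop; the names relight_video, decompose_video, and synthesize_video are hypothetical placeholders, not the repository's API:

```python
import torch

def relight_video(frames: torch.Tensor, new_irradiance: torch.Tensor,
                  decompose_video, synthesize_video) -> torch.Tensor:
    # Hypothetical sketch: inverse-render the clip, edit one intrinsic
    # channel on a single keyframe, then re-synthesize the video.
    channels = decompose_video(frames)           # video inverse rendering
    channels["irradiance"][0] = new_irradiance   # relight keyframe 0 only
    return synthesize_video(channels)            # temporally consistent output
```

Editing the albedo channel instead of irradiance would change object appearance rather than lighting.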
Who developed V-RGBX?
It was developed by a research team including Ye Fang, Tong Wu, and collaborators from Adobe Research.
What hardware is needed for V-RGBX?
Inference requires a powerful GPU because of the complex rendering and synthesis pipelines; the model is deployed locally via the GitHub repository.
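A quick way to check whether a local machine is plausibly up to the task, using standard PyTorch calls (the repository does not publish a minimum VRAM figure, so treat any threshold as a guess):

```python
import torch

# Report the local GPU before attempting inference; without a CUDA
# device, the rendering/synthesis pipelines will be impractical on CPU.
if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(f"GPU: {props.name}, {props.total_memory / 1024**3:.1f} GiB VRAM")
else:
    print("No CUDA GPU detected; V-RGBX inference is unlikely to be feasible.")
```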
Is there a demo or hosted version of V-RGBX?
No hosted web demo is mentioned; users must set it up and run it locally using the released code and weights.
What applications is V-RGBX best for?
It is well suited to VFX, film post-production, precise relighting, object editing in video, and research on intrinsic-aware AI editing.