What is EgoEdit?
EgoEdit is a research framework from Snap Research for real-time, instruction-guided editing of egocentric (first-person) videos, enabling interactive AR applications with object manipulation and style transfer.
When was EgoEdit announced?
The EgoEdit paper (arXiv:2512.06065) was published on December 5, 2025, with the dataset and benchmark release planned soon after.
Is EgoEdit open-source or free?
It’s a research project; the dataset (EgoEditData) and benchmark (EgoEditBench) are planned for public release, but the availability of model code or a demo had not been confirmed as of early 2026.
What makes EgoEdit unique?
It specializes in egocentric video, handling the rapid camera motion, hand occlusions, and hand-object interactions characteristic of first-person footage while delivering real-time, low-latency editing on a single GPU.
What hardware does EgoEdit require?
It runs in real time on a single H100 GPU, with an 855 ms first-frame latency and 38.1 FPS streaming throughput.
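As a rough illustration of what those numbers imply for an interactive session, here is a minimal sketch in Python. The latency and throughput figures come from the answer above; the warm-up-plus-constant-rate timing model is an assumption for illustration, not a description of EgoEdit's actual pipeline.

    # Hypothetical timing model: a fixed first-frame warm-up followed by
    # constant-rate streaming at the reported throughput.
    first_frame_ms = 855                  # reported first-frame latency
    streaming_fps = 38.1                  # reported streaming throughput
    per_frame_ms = 1000 / streaming_fps   # ~26.2 ms steady-state frame budget

    n_frames = 60                         # e.g., two seconds of 30 FPS input
    total_s = (first_frame_ms + (n_frames - 1) * per_frame_ms) / 1000
    print(f"Frame budget: {per_frame_ms:.1f} ms; {n_frames} frames in ~{total_s:.2f} s")

Under this model, a 60-frame clip streams out in roughly 2.4 seconds including the 855 ms warm-up, which is why the steady-state rate (above 30 FPS) matters more than first-frame latency for continuous AR use.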
What are EgoEdit’s main capabilities?
Object morphing and substitution, addition and removal of objects, scene replacement, style transfer (e.g., ukiyo-e), depth-map generation, and complex instruction following, all in first-person views.
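To make the interaction model concrete, here is a hypothetical sketch of a streaming, instruction-guided edit loop. Every name in it (camera_stream, edit_frame) is invented for illustration; EgoEdit's actual interface has not been published.

    from typing import Iterator

    def camera_stream(n: int = 3) -> Iterator[bytes]:
        """Stub standing in for an egocentric camera feed (illustrative only)."""
        for _ in range(n):
            yield b"<frame>"

    def edit_frame(frame: bytes, instruction: str) -> bytes:
        """Stub standing in for the editing model; returns the frame unchanged."""
        return frame  # a real system would return the edited frame here

    instruction = "replace the coffee mug in my hand with a teapot"
    for frame in camera_stream():
        edited = edit_frame(frame, instruction)
        # an AR app would display `edited` here, within the per-frame budget

The point of the sketch is the shape of the loop: one natural-language instruction applied frame by frame to a live first-person stream, rather than an offline batch edit.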
Who developed EgoEdit?
The project was led by Snap Research with collaborators from Rice University and the University of Oxford; the author list includes Runjia Li and Sergey Tulyakov.
What is EgoEditBench?
A comprehensive benchmark for evaluating egocentric video-editing systems; the paper uses it to compare EgoEdit against baselines such as Senorita and InsV2V.