StableVideo: Text-driven Consistency-aware Diffusion Video Editing

Cited by: 0
Authors
Chai, Wenhao [1 ]
Guo, Xun [2 ]
Wang, Gaoang [1 ]
Lu, Yan [2 ]
Affiliations
[1] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[2] Microsoft Res Asia, Beijing, Peoples R China
Source
2023 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2023) | 2023
Keywords
DOI
Not available
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Diffusion-based methods can generate realistic images and videos, but they struggle to edit existing objects in a video while preserving their appearance over time. This prevents diffusion models from being applied to natural video editing in practical scenarios. In this paper, we tackle this problem by introducing temporal dependency to existing text-driven diffusion models, which allows them to generate a consistent appearance for the edited objects. Specifically, we develop a novel inter-frame propagation mechanism for diffusion video editing, which leverages the concept of layered representations to propagate the appearance information from one frame to the next. We then build a text-driven video editing framework on this mechanism, namely StableVideo, which can achieve consistency-aware video editing. Extensive experiments demonstrate the strong editing capability of our approach. Compared with state-of-the-art video editing methods, our approach shows superior qualitative and quantitative results.
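The propagation mechanism described in the abstract amounts to a loop in which each frame's edit is conditioned on the appearance carried over from the previously edited frame through the shared layered representation. The following is a minimal illustrative sketch of that loop, not the authors' code: every function here (diffusion_edit, propagate_appearance, edit_video) is a hypothetical placeholder standing in for the paper's actual components, i.e. a text-driven diffusion editor and the layered-atlas mapping.

import numpy as np

def diffusion_edit(frame, prompt, appearance_hint=None):
    # Stand-in for a text-driven diffusion edit of a single frame; a real
    # system would denoise under `prompt`, conditioned on the propagated
    # appearance so the edited object matches the previous frame.
    edited = frame.copy()
    if appearance_hint is not None:
        edited = 0.5 * edited + 0.5 * appearance_hint
    return edited

def propagate_appearance(edited_frame):
    # Stand-in for mapping the edited appearance through the shared layered
    # representation into the next frame's coordinates; an identity warp
    # here, purely to mark where the propagation step sits in the loop.
    return edited_frame

def edit_video(frames, prompt):
    edited, hint = [], None
    for frame in frames:
        out = diffusion_edit(frame, prompt, appearance_hint=hint)
        edited.append(out)
        # The temporal dependency: the edit of frame t+1 is conditioned on
        # the appearance propagated from the edit of frame t.
        hint = propagate_appearance(out)
    return edited

frames = [np.random.rand(64, 64, 3) for _ in range(4)]
edited_frames = edit_video(frames, "a vintage red car")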
Pages: 22983-22993
Number of pages: 11