MV-Diffusion: Motion-aware Video Diffusion Model

Cited by: 7
Authors
Deng, Zijun [1 ,2 ]
He, Xiangteng [1 ,2 ]
Peng, Yuxin [1 ,2 ]
Zhu, Xiongwei [3 ]
Cheng, Lele [3 ]
Affiliations
[1] Peking Univ, Wangxuan Inst Comp Technol, Beijing, Peoples R China
[2] Peking Univ, Natl Key Lab Multimedia Informat Proc, Beijing, Peoples R China
[3] Kuaishou Technol, Beijing, Peoples R China
Source
PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023 | 2023
Funding
National Natural Science Foundation of China
Keywords
Autoregressive video diffusion; Video generation; Motion capture
DOI
10.1145/3581783.3612405
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this paper, we present a Motion-aware Video Diffusion Model (MV-Diffusion) for enhancing the temporal consistency of videos generated by autoregressive diffusion models. Despite the success of diffusion models across vision generation tasks, generating high-quality, realistic videos with a coherent temporal structure remains challenging. Current methods primarily capture implicit motion features within a restricted window of RGB frames rather than modeling motion explicitly. To address this, we improve the temporal modeling ability of current autoregressive video diffusion approaches by leveraging rich temporal trajectory information in a global context and explicitly modeling local motion trends. The main contributions of this work are: (1) a Trajectory Modeling (TM) block that enhances the model's conditioning by incorporating global motion trajectory information, and (2) a Motion Trend Attention (MTA) block that uses a cross-attention mechanism to explicitly infer motion trends from optical flow rather than learning them implicitly from RGB input. Experimental results on three video generation tasks across four datasets demonstrate the effectiveness of MV-Diffusion, which outperforms existing state-of-the-art approaches. The code is available at https://github.com/PKU-ICST-MIPL/MVDiffusion_ACMMM2023.
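The core mechanism the abstract describes for the MTA block is cross-attention in which queries come from the RGB video latents and keys/values come from optical-flow features. The sketch below illustrates that idea in PyTorch; the class name, tensor shapes, single-head design, and residual connection are assumptions made for illustration, not the authors' implementation (see the linked repository for the actual code).

    # Minimal sketch of a motion-trend cross-attention block (assumed design).
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class MotionTrendAttention(nn.Module):
        def __init__(self, rgb_dim: int, flow_dim: int, attn_dim: int = 64):
            super().__init__()
            self.to_q = nn.Linear(rgb_dim, attn_dim)   # queries from RGB latents
            self.to_k = nn.Linear(flow_dim, attn_dim)  # keys from optical-flow features
            self.to_v = nn.Linear(flow_dim, attn_dim)  # values from optical-flow features
            self.proj = nn.Linear(attn_dim, rgb_dim)
            self.scale = attn_dim ** -0.5

        def forward(self, rgb_feat: torch.Tensor, flow_feat: torch.Tensor) -> torch.Tensor:
            # rgb_feat:  (B, N, rgb_dim)  flattened spatial tokens of the video latent
            # flow_feat: (B, M, flow_dim) tokens encoding optical flow between frames
            q = self.to_q(rgb_feat)
            k = self.to_k(flow_feat)
            v = self.to_v(flow_feat)
            attn = F.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
            # Residual connection: motion cues modulate rather than replace RGB features.
            return rgb_feat + self.proj(attn @ v)

    if __name__ == "__main__":
        mta = MotionTrendAttention(rgb_dim=320, flow_dim=128)
        rgb = torch.randn(2, 256, 320)   # e.g. a 16x16 latent grid per frame
        flow = torch.randn(2, 256, 128)
        print(mta(rgb, flow).shape)      # torch.Size([2, 256, 320])

The key design point this sketch captures is that motion is supplied as an explicit conditioning signal (optical flow) attended to by the denoiser's features, rather than being learned implicitly from the RGB frames alone.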
Pages: 7255-7263
Page count: 9