MEVG: Multi-event Video Generation with Text-to-Video Models

Cited: 0
Authors
Oh, Gyeongrok [1 ]
Jeong, Jaehwan [1 ]
Kim, Sieun [1 ]
Byeon, Wonmin [2 ]
Kim, Jinkyu [1 ]
Kim, Sungwoong [1 ]
Kim, Sangpil [1 ]
Affiliations
[1] Korea Univ, Seoul, South Korea
[2] NVIDIA, Santa Clara, CA USA
Source
COMPUTER VISION - ECCV 2024, PT XLIII | 2025 / Vol. 15101
Keywords
Multi-event video generation; Training-free; Diffusion model;
DOI
10.1007/978-3-031-72775-7_23
CLC Classification Number
TP18 [Theory of Artificial Intelligence];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
We introduce a novel diffusion-based video generation method that generates a video depicting multiple events given multiple individual sentences from the user. Our method does not require a large-scale video dataset, since it uses a pre-trained diffusion-based text-to-video generative model without any fine-tuning. Specifically, we propose a last frame-aware diffusion process that preserves visual coherence between consecutive videos, where each video depicts a different event, by initializing the latent from the last frame and simultaneously adjusting the noise in the latent to enhance motion dynamics in the generated video. Furthermore, we find that iteratively updating the latent vectors by referring to all preceding frames maintains a globally consistent appearance across the frames of a video clip. To handle dynamic text input for video generation, we utilize a novel prompt generator that converts coarse text from the user into multiple optimal prompts for the text-to-video diffusion model. Extensive experiments and user studies show that our proposed method is superior to other video-generative models in terms of the temporal coherency of content and semantics. Video examples are available on our project page: https://kuai-lab.github.io/eccv2024mevg.
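The last frame-aware initialization described in the abstract can be sketched at a high level as follows. This is a minimal NumPy illustration, not the paper's actual implementation: the function name, the blending weight `noise_scale`, and the toy latent shapes are all assumptions made for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_next_latent(prev_latents, noise_scale=0.5):
    """Initialize the latent for the next event's clip from the last
    frame of the previous clip, blended with fresh Gaussian noise.
    Carrying the last frame preserves appearance across clips, while
    the injected noise leaves room for new motion (illustrative
    reading of the abstract; weights are hypothetical)."""
    last = prev_latents[-1]                       # latent of the final frame
    noise = rng.standard_normal(last.shape)
    # variance-preserving blend of last-frame latent and fresh noise
    return np.sqrt(1.0 - noise_scale**2) * last + noise_scale * noise

# Toy usage: an 8-frame clip of 4x4x3 latents for the preceding event.
prev = rng.standard_normal((8, 4, 4, 3))
nxt = init_next_latent(prev)
```

In a real pipeline, `nxt` would seed the diffusion sampler for the next event's clip instead of an i.i.d. Gaussian latent.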
Pages: 401-418
Page count: 18