Latent Video Transformer

Cited by: 18
Authors
Rakhimov, Ruslan [1 ]
Volkhonskiy, Denis [1 ]
Artemov, Alexey [1 ]
Zorin, Denis [1 ,2 ]
Burnaev, Evgeny [1 ]
Affiliations
[1] Skolkovo Inst Sci & Technol, Moscow, Russia
[2] NYU, New York, NY USA
Source
VISAPP: PROCEEDINGS OF THE 16TH INTERNATIONAL JOINT CONFERENCE ON COMPUTER VISION, IMAGING AND COMPUTER GRAPHICS THEORY AND APPLICATIONS - VOL. 5: VISAPP | 2021
Funding
Russian Science Foundation
Keywords
Video Generation; Deep Learning; Generative Adversarial Networks;
DOI
10.5220/0010241801010112
Chinese Library Classification
TP18 [Theory of artificial intelligence]
Discipline codes
081104; 0812; 0835; 1405
Abstract
The video generation task can be formulated as a prediction of future video frames given some past frames. Recent generative models for videos face the problem of high computational requirements. Some models require up to 512 Tensor Processing Units for parallel training. In this work, we address this problem via modeling the dynamics in a latent space. After the transformation of frames into the latent space, our model predicts latent representation for the next frames in an autoregressive manner. We demonstrate the performance of our approach on BAIR Robot Pushing and Kinetics-600 datasets. The approach tends to reduce requirements to 8 Graphical Processing Units for training the models while maintaining comparable generation quality.
Pages: 101-112
Page count: 12