Dual Motion GAN for Future-Flow Embedded Video Prediction

Cited by: 213
Authors
Liang, Xiaodan [1 ]
Lee, Lisa [1 ]
Dai, Wei [2 ]
Xing, Eric P. [2 ]
Affiliations
[1] Carnegie Mellon Univ, Pittsburgh, PA 15213 USA
[2] Petuum Inc, Pittsburgh, PA USA
Source
2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV) | 2017
Funding
Andrew W. Mellon Foundation (USA);
Keywords
DOI
10.1109/ICCV.2017.194
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Future frame prediction in videos is a promising avenue for unsupervised video representation learning. Video frames are naturally generated by the inherent pixel flows from preceding frames based on the appearance and motion dynamics in the video. However, existing methods focus on directly hallucinating pixel values, resulting in blurry predictions. In this paper, we develop a dual motion Generative Adversarial Net (GAN) architecture, which learns to explicitly enforce future-frame predictions to be consistent with the pixel-wise flows in the video through a dual-learning mechanism. The primal future-frame prediction and dual future-flow prediction form a closed loop, generating informative feedback signals to each other for better video prediction. To make both synthesized future frames and flows indistinguishable from reality, a dual adversarial training method is proposed to ensure that the future-flow prediction is able to help infer realistic future-frames, while the future-frame prediction in turn leads to realistic optical flows. Our dual motion GAN also handles natural motion uncertainty in different pixel locations with a new probabilistic motion encoder, which is based on variational autoencoders. Extensive experiments demonstrate that the proposed dual motion GAN significantly outperforms state-of-the-art approaches on synthesizing new video frames and predicting future flows. Our model generalizes well across diverse visual scenes and shows superiority in unsupervised video representation learning.
Pages: 1762-1770
Page count: 9