Learning Temporal Transformations from Time-Lapse Videos

Cited by: 82
Authors
Zhou, Yipin [1]
Berg, Tamara L. [1]
Affiliation
[1] University of North Carolina at Chapel Hill, Chapel Hill, NC 27514, USA
Source
Computer Vision - ECCV 2016, Part VIII | 2016, Vol. 9912
Keywords
Generation; Temporal prediction; Time-lapse video
DOI
10.1007/978-3-319-46484-8_16
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Based on life-long observations of physical, chemical, and biological phenomena in the natural world, humans can often easily picture in their minds what an object will look like in the future. But what about computers? In this paper, we learn computational models of object transformations from time-lapse videos. In particular, we explore the use of generative models to create depictions of objects at future times. With these models we explore several different prediction tasks: generating a future state given a single depiction of an object, generating a future state given two depictions of an object at different times, and generating future states recursively in a recurrent framework. We provide both qualitative and quantitative evaluations of the generated results, and also conduct a human evaluation to compare variations of our models.
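The three prediction tasks described above share a common generative backbone. As an illustration only, below is a minimal sketch of the first task (one input frame, one predicted future frame) as a convolutional encoder-decoder trained with a pixel-wise L2 loss; the class name, layer sizes, input resolution, and objective are assumptions made for this sketch, not the paper's exact architecture.

import torch
import torch.nn as nn

class FutureFrameGenerator(nn.Module):
    """Hypothetical single-input future predictor (not the authors' exact model)."""
    def __init__(self):
        super().__init__()
        # Encoder: 3x64x64 input frame -> 256x8x8 feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=4, stride=2, padding=1),    # -> 64x32x32
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1),  # -> 128x16x16
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 256, kernel_size=4, stride=2, padding=1), # -> 256x8x8
            nn.ReLU(inplace=True),
        )
        # Decoder: feature map -> 3x64x64 predicted future frame.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # pixel intensities in [0, 1]
        )

    def forward(self, frame):
        return self.decoder(self.encoder(frame))

# One training step on placeholder data: predict the frame at a later time
# from the frame at time t.
model = FutureFrameGenerator()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
frame_t     = torch.rand(8, 3, 64, 64)  # object at time t (dummy batch)
frame_later = torch.rand(8, 3, 64, 64)  # same object at a later time
loss = nn.functional.mse_loss(model(frame_t), frame_later)
optimizer.zero_grad()
loss.backward()
optimizer.step()

Under the same assumptions, the two-input task would concatenate the two depictions along the channel dimension before encoding, and the recurrent task would feed each generated frame back in as the next input to produce future states step by step.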
Pages: 262-277 (16 pages)