Learning to Anticipate Egocentric Actions by Imagination

Cited by: 50
Authors
Wu, Yu [1 ]
Zhu, Linchao [1 ]
Wang, Xiaohan [1 ]
Yang, Yi [1 ]
Wu, Fei [2 ]
Affiliations
[1] Univ Technol Sydney, Australian Artificial Intelligence Inst, Ultimo, NSW 2007, Australia
[2] Zhejiang Univ, Coll Comp Sci & Technol, Hangzhou 310027, Peoples R China
Keywords
Task analysis; Uncertainty; Predictive models; Visualization; Recurrent neural networks; Image segmentation; Image recognition; Action anticipation; action prediction; egocentric videos;
DOI
10.1109/TIP.2020.3040521
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
Anticipating actions before they are executed is crucial for a wide range of practical applications, including autonomous driving and robotics. In this paper, we study the egocentric action anticipation task, which aims to predict a future action several seconds before it is performed, given the observed portion of an egocentric video. Previous approaches focus on summarizing the observed content and directly predicting the future action from past observations. We argue that action anticipation benefits from mining cues that compensate for the missing information in the unobserved frames. We therefore propose to decompose action anticipation into a series of future feature predictions: we imagine how the visual features change in the near future and then predict future action labels based on these imagined representations. Unlike prior feature-regression approaches, our ImagineRNN is optimized with contrastive learning; we train it with a proxy task of selecting the correct future states from distractors. We further improve ImagineRNN with residual anticipation, changing its target from predicting frame content to predicting the feature difference between adjacent frames. This encourages the network to focus on the target of interest, the future action, because the change between adjacent frame features is more informative for forecasting the future than the frame content itself. Extensive experiments on two large-scale egocentric action datasets validate the effectiveness of our method, which significantly outperforms previous methods on both the seen and the unseen test sets of the EPIC-Kitchens Action Anticipation Challenge.
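To make the described pipeline concrete (imagining future features step by step, training the imaginer with a contrastive proxy task that picks the true future state out of distractors, and predicting feature differences rather than raw frame content), here is a minimal PyTorch sketch. This is not the authors' implementation: the class and function names, the GRU backbone, the feature dimension, and the InfoNCE-style loss are all assumptions made purely for illustration; the paper's actual architecture and objective may differ.

```python
# Illustrative sketch only (not the authors' code): an ImagineRNN-style
# anticipation module with residual anticipation and a contrastive proxy task.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImagineRNNSketch(nn.Module):
    def __init__(self, feat_dim=1024, hidden_dim=1024, num_actions=2513):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        # Residual anticipation: predict the *difference* between adjacent
        # frame features instead of the raw future frame content.
        self.delta_head = nn.Linear(hidden_dim, feat_dim)
        self.classifier = nn.Linear(feat_dim, num_actions)

    def forward(self, observed_feats, anticipation_steps=4):
        # observed_feats: (B, T, feat_dim) features of the observed segment.
        out, h = self.rnn(observed_feats)
        current = observed_feats[:, -1]            # last observed feature
        imagined = []
        x = current.unsqueeze(1)
        for _ in range(anticipation_steps):
            out, h = self.rnn(x, h)
            delta = self.delta_head(out[:, -1])    # predicted feature change
            current = current + delta              # imagined future feature
            imagined.append(current)
            x = current.unsqueeze(1)
        imagined = torch.stack(imagined, dim=1)    # (B, steps, feat_dim)
        logits = self.classifier(imagined[:, -1])  # anticipate the action
        return imagined, logits

def contrastive_proxy_loss(imagined, true_future, temperature=0.1):
    # Proxy task: select the correct future state among in-batch distractors
    # (an InfoNCE-style objective assumed here for illustration).
    # imagined, true_future: (B, feat_dim)
    imagined = F.normalize(imagined, dim=-1)
    true_future = F.normalize(true_future, dim=-1)
    logits = imagined @ true_future.t() / temperature   # (B, B) similarities
    targets = torch.arange(imagined.size(0), device=imagined.device)
    return F.cross_entropy(logits, targets)

if __name__ == "__main__":
    model = ImagineRNNSketch()
    obs = torch.randn(8, 14, 1024)        # 8 clips, 14 observed time steps
    future = torch.randn(8, 1024)         # ground-truth future feature
    imagined, logits = model(obs)
    loss = contrastive_proxy_loss(imagined[:, -1], future)
    print(loss.item(), logits.shape)
```

The sketch keeps the two ideas named in the abstract separable: the delta head implements residual anticipation, and the contrastive loss replaces plain feature regression with a ranking objective over distractors.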
Pages: 1143-1152
Page count: 10