Action anticipation for collaborative environments: The impact of contextual information and uncertainty-based prediction

Cited by: 5
Authors
Canuto, Clebeson [1 ]
Moreno, Plinio [2 ]
Samatelo, Jorge [1 ]
Vassallo, Raquel [1 ]
Santos-Victor, Jose [2 ]
Affiliations
[1] Univ Fed Espirito Santo, Dept Elect Engn, Room 20,CT 2,Av Fernando Ferrari 514, BR-29075910 Vitoria, ES, Brazil
[2] Univ Lisbon, Inst Syst & Robot, Inst Super Tecn, Floor 7,North Tower,Av Rovisco Pais 1, P-1049001 Lisbon, Portugal
Keywords
Action anticipation; Early action prediction; Context information; Bayesian deep learning; Uncertainty; Action recognition
DOI
10.1016/j.neucom.2020.07.135
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Discipline classification codes
081104; 0812; 0835; 1405
Abstract
To interact with humans in collaborative environments, machines need to be able to predict (i.e., anticipate) future events and execute actions in a timely manner. However, observing human limb movements alone may not be sufficient to anticipate actions unambiguously. In this work, we consider two additional sources of information (i.e., context) over time, gaze movements and object information, and study how these contextual cues improve action anticipation performance. We address action anticipation as a classification task, where the model takes the available information as input and predicts the most likely action. We propose to use the uncertainty of each prediction as an online decision-making criterion for action anticipation. Uncertainty is modeled as a stochastic process applied to a time-based neural network architecture, which improves on the conventional class-likelihood (i.e., deterministic) criterion. The main contributions of this paper are fourfold: (i) we propose a novel and effective decision-making criterion that can anticipate actions even in situations of high ambiguity; (ii) we propose a deep architecture that outperforms previous results on the action anticipation task on the Acticipate collaborative dataset; (iii) we show that contextual information is important to disambiguate similar actions; and (iv) we provide a formal description of three existing performance metrics that can be readily used to evaluate action anticipation models. Our results on the Acticipate dataset show the importance of contextual information and of the uncertainty criterion for action anticipation. We achieve an average accuracy of 98.75% in the anticipation task while using, on average, only 25% of the observations. Moreover, considering that a good anticipation model should also perform well in action recognition, we achieve an average accuracy of 100% in action recognition on the Acticipate dataset when the entire observation set is used. (C) 2020 Elsevier B.V. All rights reserved.
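
The uncertainty-based criterion described in the abstract can be illustrated roughly as follows: a recurrent classifier is kept stochastic at test time (e.g., via Monte Carlo dropout), the predictive distribution over actions is estimated from several stochastic forward passes on the partially observed sequence, and a prediction is committed as soon as its uncertainty (here, predictive entropy) falls below a threshold. The PyTorch sketch below is a minimal, hypothetical illustration, not the authors' implementation; the model, feature layout, and names such as ContextLSTM, mc_samples, and entropy_threshold are assumptions.

    # Hypothetical sketch: Monte Carlo dropout over a recurrent classifier, with
    # predictive entropy as the online anticipation criterion. Names
    # (ContextLSTM, mc_samples, entropy_threshold) are illustrative only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ContextLSTM(nn.Module):
        """Classifies an action from a growing sequence of per-frame context
        features (e.g., concatenated movement, gaze and object descriptors)."""
        def __init__(self, feat_dim, hidden_dim, n_classes, p_drop=0.5):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden_dim, batch_first=True)
            self.head = nn.Linear(hidden_dim, n_classes)
            self.p_drop = p_drop

        def forward(self, x):                     # x: (batch, time, feat_dim)
            h, _ = self.lstm(x)
            # Dropout stays active at test time so repeated passes are stochastic.
            h = F.dropout(h[:, -1], p=self.p_drop, training=True)
            return self.head(h)                   # unnormalized class scores

    def predictive_entropy(model, x, mc_samples=20):
        """Average the softmax over several stochastic passes and return the
        entropy of that predictive distribution plus the most likely class."""
        with torch.no_grad():
            probs = torch.stack(
                [F.softmax(model(x), dim=-1) for _ in range(mc_samples)]
            ).mean(dim=0)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        return entropy, probs.argmax(dim=-1)

    def anticipate(model, frames, entropy_threshold=0.3):
        """Online decision rule for one sequence of shape (1, T, feat_dim):
        commit to a prediction as soon as the uncertainty of the partial
        observation drops below a threshold, instead of waiting for the
        whole action to unfold."""
        for t in range(1, frames.size(1) + 1):
            ent, cls = predictive_entropy(model, frames[:, :t])
            if ent.item() < entropy_threshold:
                return cls.item(), t              # anticipated after t frames
        return cls.item(), frames.size(1)         # fell back to full observation

In this sketch, the conventional class-likelihood (deterministic) criterion would instead threshold the maximum probability of a single forward pass; thresholding the entropy of the Monte Carlo average is what lets the decision rule defer when similar actions are still ambiguous.
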
Pages: 301-318
Number of pages: 18