Self-Supervised Joint Encoding of Motion and Appearance for First Person Action Recognition

Cited: 7
Authors
Planamente, Mirco [1 ,2 ]
Bottino, Andrea [1 ]
Caputo, Barbara [1 ,2 ]
Affiliations
[1] Politecnico di Torino, Department of Control and Computer Engineering, Turin, Italy
[2] Italian Institute of Technology, Genoa, Italy
Source
2020 25th International Conference on Pattern Recognition (ICPR), 2021
Keywords
Egocentric Vision; Action Recognition; Multi-task Learning; Motion Prediction; Self-supervised Learning
DOI
10.1109/ICPR48806.2021.9411972
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Wearable cameras are becoming increasingly popular in a variety of applications, driving the research community's interest in developing approaches for recognizing actions from the first-person point of view. An open challenge in egocentric action recognition is that videos lack detailed information about the main actor's pose and, when focusing on manipulation tasks, tend to record only parts of the movement. The amount of information about the action itself is therefore limited, making the understanding of the manipulated objects and their context crucial. Many previous works addressed this issue with two-stream architectures, where one stream is dedicated to modeling the appearance of the objects involved in the action, and another to extracting motion features from optical flow. In this paper, we argue that learning features jointly from these two information channels is beneficial to better capture the spatio-temporal correlations between them. To this end, we propose a single-stream architecture able to do so, thanks to the addition of a self-supervised block that uses a pretext motion prediction task to intertwine motion and appearance knowledge. Experiments on several publicly available databases demonstrate the effectiveness of our approach.
Pages: 8751-8758
Number of pages: 8