SOS! Self-supervised Learning over Sets of Handled Objects in Egocentric Action Recognition

Cited: 5
Authors
Escorcia, Victor [1 ]
Guerrero, Ricardo [1 ]
Zhu, Xiatian [1 ]
Martinez, Brais [1 ]
Affiliations
[1] Samsung AI Ctr Cambridge, Cambridge, England
Source
COMPUTER VISION, ECCV 2022, PT XIII | 2022, Vol. 13673
Keywords
Handled objects; Egocentric action recognition; Self-supervised pre-training over sets; Long-tail setup
DOI
10.1007/978-3-031-19778-9_35
CLC number
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Learning an egocentric action recognition model from video data is challenging due to distractors in the background, e.g., irrelevant objects. Further integrating object information into an action model is hence beneficial. Existing methods often leverage a generic object detector to identify and represent the objects in the scene. However, several important issues remain. Object class annotations of good quality for the target domain (dataset) are still required for learning good object representation. Moreover, previous methods deeply couple existing action models with object representations, and thus need to retrain them jointly, leading to costly and inflexible integration. To overcome both limitations, we introduce Self-Supervised Learning Over Sets (SOS), an approach to pre-train a generic Objects In Contact (OIC) representation model from video object regions detected by an off-the-shelf hand-object contact detector. Instead of augmenting object regions individually as in conventional self-supervised learning, we view the action process as a means of natural data transformations with unique spatiotemporal continuity and exploit the inherent relationships among per-video object sets. Extensive experiments on two datasets, EPIC-KITCHENS-100 and EGTEA, show that our OIC significantly boosts the performance of multiple state-of-the-art video classification models.
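The core idea of the abstract, treating the per-video set of detected object regions as two natural "views" of the same action rather than augmenting crops individually, can be sketched as a set-level contrastive objective. This is a minimal illustration under assumptions, not the authors' implementation: the function name `set_contrastive_loss`, the mean-pooling of set members, and the InfoNCE formulation are all assumptions made here for clarity; the paper's actual pretext task and architecture may differ.

```python
import numpy as np

def pool_set(features):
    """Mean-pool a set of per-object feature vectors into one
    L2-normalised set embedding (order-invariant)."""
    v = features.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-8)

def set_contrastive_loss(video_sets, temperature=0.1, rng=None):
    """Set-level InfoNCE sketch: split each video's object-crop
    features into two disjoint random subsets; the pooled embeddings
    of the two halves form a positive pair, while halves from other
    videos in the batch serve as negatives.

    video_sets : list of (n_i, d) arrays, one per video, n_i >= 2.
    Returns the mean cross-entropy over the batch (a float >= 0).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    a, b = [], []
    for feats in video_sets:
        idx = rng.permutation(len(feats))
        half = len(feats) // 2
        a.append(pool_set(feats[idx[:half]]))   # first "view" of the set
        b.append(pool_set(feats[idx[half:]]))   # second "view"
    a, b = np.stack(a), np.stack(b)
    logits = a @ b.T / temperature              # (N, N) cosine similarities
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    p = np.exp(logits)
    p /= p.sum(axis=1, keepdims=True)
    # matching halves of the same video lie on the diagonal
    return float(-np.log(np.diag(p) + 1e-12).mean())
```

Because both subsets come from the same action clip, the positive pair varies in pose, occlusion, and background "for free", which is the spatiotemporal-continuity argument the abstract makes against per-crop synthetic augmentation.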
Pages: 604-620 (17 pages)