Egocentric Action Recognition by Automatic Relation Modeling

Cited: 8
Authors
Li, Haoxin [1 ]
Zheng, Wei-Shi [2 ,3 ,4 ]
Zhang, Jianguo [5 ,6 ]
Hu, Haifeng [1 ]
Lu, Jiwen [7 ]
Lai, Jian-Huang [2 ,8 ]
Affiliations
[1] Sun Yat-sen Univ, Sch Elect & Informat Technol, Guangzhou 510275, Peoples R China
[2] Sun Yat-sen Univ, Sch Comp Sci & Engn, Guangzhou 510275, Peoples R China
[3] Peng Cheng Lab, Shenzhen 518005, Peoples R China
[4] Sun Yat-sen Univ, Key Lab Machine Intelligence & Adv Comp, Minist Educ, Guangzhou 510275, Peoples R China
[5] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Guangdong, Peoples R China
[6] Southern Univ Sci & Technol, Res Inst Trustworthy Autonomous Syst, Shenzhen 518055, Peoples R China
[7] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRist, Dept Automat, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
[8] Guangdong Prov Key Lab Informat Secur, Shenzhen 518040, Guangdong, Peoples R China
Keywords
Egocentric action recognition; human-object interaction recognition; histograms; network
DOI
10.1109/TPAMI.2022.3148790
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Egocentric videos, which record the daily activities of individuals from a first-person point of view, have attracted increasing attention in recent years because of their growing use in many popular applications, including life logging, health monitoring, and virtual reality. As a fundamental problem in egocentric vision, egocentric action recognition aims to recognize the actions of the camera wearer from egocentric videos. In egocentric action recognition, relation modeling is important because the interactions between the camera wearer and the recorded persons or objects form complex relations in egocentric videos. However, only a few existing methods model the relations between the camera wearer and the interacting persons for egocentric action recognition, and, moreover, they require prior knowledge or auxiliary data to localize the interacting persons. In this work, we consider modeling the relations in a weakly supervised manner, i.e., without using annotations or prior knowledge about the interacting persons or objects, for egocentric action recognition. We form a weakly supervised framework that unifies automatic interactor localization and explicit relation modeling for the purpose of automatic relation modeling. First, we learn to automatically localize the interactors, i.e., the body parts of the camera wearer and the persons or objects that the camera wearer interacts with, by learning a series of keypoints directly from video data to localize the action-relevant regions, using only action labels and some constraints on these keypoints. Second, and more importantly, to explicitly model the relations between the interactors, we develop an ego-relational LSTM (long short-term memory) network with several candidate connections to model the complex relations in egocentric videos, such as the temporal, interactive, and contextual relations. In particular, to reduce the human effort and manual intervention needed to construct an optimal ego-relational LSTM structure, we search for the optimal connections with a differentiable network architecture search mechanism, which automatically constructs the ego-relational LSTM network to explicitly model different relations for egocentric action recognition. We conduct extensive experiments on egocentric video datasets to demonstrate the effectiveness of our method.
Pages: 489-507
Number of Pages: 19
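Illustrative sketches. The abstract describes two technical components: weakly supervised interactor localization via learned keypoints, and a differentiable architecture search over candidate relation connections in an ego-relational LSTM. The two PyTorch sketches below are minimal illustrations of these general techniques, not the authors' implementation; all names (KeypointLocalizer, EgoRelationalCell, alpha, separation_loss) and design details are hypothetical assumptions.

A spatial soft-argmax is one standard way to learn keypoints from action labels alone: per-keypoint heatmaps are normalized into probability maps, and the expected coordinates are differentiable, so the action loss (plus simple constraints, such as a separation term) can position the keypoints on action-relevant regions without keypoint annotations.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class KeypointLocalizer(nn.Module):
    """Predicts K keypoints from a frame's feature map via spatial
    soft-argmax; trained only with the action loss plus constraints."""
    def __init__(self, in_channels, num_keypoints):
        super().__init__()
        self.heatmaps = nn.Conv2d(in_channels, num_keypoints, kernel_size=1)

    def forward(self, feat):                       # feat: (B, C, H, W)
        B, _, H, W = feat.shape
        logits = self.heatmaps(feat).flatten(2)    # (B, K, H*W)
        prob = F.softmax(logits, dim=-1).view(B, -1, H, W)
        ys = torch.linspace(0, 1, H, device=feat.device)
        xs = torch.linspace(0, 1, W, device=feat.device)
        ky = (prob.sum(dim=3) * ys).sum(dim=2)     # expected y per keypoint
        kx = (prob.sum(dim=2) * xs).sum(dim=2)     # expected x per keypoint
        return torch.stack([kx, ky], dim=-1)       # (B, K, 2) in [0, 1]

def separation_loss(kps, margin=0.1):
    """A hypothetical keypoint constraint: keep keypoints from collapsing."""
    d = torch.cdist(kps, kps)                      # (B, K, K) pairwise dists
    off_diag = ~torch.eye(kps.size(1), dtype=torch.bool, device=kps.device)
    return F.relu(margin - d[:, off_diag]).mean()
```

For the relation modeling, a DARTS-style continuous relaxation replaces the discrete choice among candidate connections (e.g., temporal, interactive, and contextual streams) with a softmax-weighted mixture over learnable architecture parameters; after search, the strongest connections would typically be kept and the rest pruned.

```python
class EgoRelationalCell(nn.Module):
    """LSTM cell whose auxiliary input is a soft mixture over candidate
    relation streams, weighted by learnable architecture parameters."""
    def __init__(self, input_dim, hidden_dim, num_candidates=3):
        super().__init__()
        self.lstm = nn.LSTMCell(input_dim * 2, hidden_dim)
        self.alpha = nn.Parameter(torch.zeros(num_candidates))

    def forward(self, x, candidates, state):
        # candidates: one (B, input_dim) tensor per candidate relation stream.
        w = F.softmax(self.alpha, dim=0)           # relax the discrete choice
        mixed = sum(wi * ci for wi, ci in zip(w, candidates))
        return self.lstm(torch.cat([x, mixed], dim=1), state)

# Toy usage: 3 candidate streams, batch of 4, 256-dim features.
cell = EgoRelationalCell(input_dim=256, hidden_dim=512)
x = torch.randn(4, 256)
streams = [torch.randn(4, 256) for _ in range(3)]
state = (torch.zeros(4, 512), torch.zeros(4, 512))
h, c = cell(x, streams, state)
```

In a bilevel, DARTS-style search, the architecture weights alpha would be updated on validation data while the network weights are updated on training data.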