Egocentric Action Recognition by Automatic Relation Modeling

Cited: 8
Authors
Li, Haoxin [1 ]
Zheng, Wei-Shi [2 ,3 ,4 ]
Zhang, Jianguo [5 ,6 ]
Hu, Haifeng [1 ]
Lu, Jiwen [7 ]
Lai, Jian-Huang [2 ,8 ]
Affiliations
[1] Sun Yat sen Univ, Sch Elect & Informat Technol, Guangzhou 510275, Peoples R China
[2] Sun Yat sen Univ, Sch Comp Sci & Engn, Guangzhou 510275, Peoples R China
[3] Peng Cheng Lab, Shenzhen 518005, Peoples R China
[4] Sun Yat sen Univ, Key Lab Machine Intelligence & Adv Comp, Minist Educ, Guangzhou 510275, Peoples R China
[5] Southern Univ Sci & Technol, Dept Comp Sci & Engn, Shenzhen 518055, Guangdong, Peoples R China
[6] Southern Univ Sci & Technol, Res Inst Trustworthy Autonomous Syst, Shenzhen 518055, Peoples R China
[7] Tsinghua Univ, Beijing Natl Res Ctr Informat Sci & Technol BNRist, Dept Automat, State Key Lab Intelligent Technol & Syst, Beijing 100084, Peoples R China
[8] Guangdong Prov Key Lab Informat Secur, Shenzhen 518040, Guangdong, Peoples R China
Keywords
Egocentric action recognition; human-object interaction recognition; HISTOGRAMS; NETWORK;
DOI
10.1109/TPAMI.2022.3148790
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Egocentric videos, which record the daily activities of individuals from a first-person point of view, have attracted increasing attention in recent years because of their growing use in popular applications such as life logging, health monitoring, and virtual reality. As a fundamental problem in egocentric vision, egocentric action recognition aims to recognize the actions of the camera wearer from egocentric videos. Relation modeling is important for this task because the interactions between the camera wearer and the recorded persons or objects form complex relations in egocentric videos. However, only a few existing methods model the relations between the camera wearer and the interacting persons, and they require prior knowledge or auxiliary data to localize those persons. In this work, we model the relations in a weakly supervised manner, i.e., without annotations or prior knowledge about the interacting persons or objects. We form a weakly supervised framework that unifies automatic interactor localization and explicit relation modeling. First, we learn to automatically localize the interactors, i.e., the body parts of the camera wearer and the persons or objects that the camera wearer interacts with, by learning a set of keypoints directly from video data; the keypoints localize the action-relevant regions using only action labels and some constraints on the keypoints themselves. Second, and more importantly, to explicitly model the relations between the interactors, we develop an ego-relational LSTM (long short-term memory) network with several candidate connections that capture the complex relations in egocentric videos, such as temporal, interactive, and contextual relations. In particular, to reduce the human effort and manual intervention needed to construct an optimal ego-relational LSTM structure, we search for the optimal connections with a differentiable network architecture search mechanism, which automatically constructs the ego-relational LSTM network to explicitly model the different relations for egocentric action recognition. Extensive experiments on egocentric video datasets demonstrate the effectiveness of our method.
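The abstract describes a differentiable search over candidate connections in the ego-relational LSTM but gives no implementation details. The sketch below is a minimal, hypothetical illustration of the usual continuous-relaxation idea behind such a search (a softmax-weighted mixture of candidate connections, as in DARTS); the candidate operations, feature dimension, and names such as MixedConnection are assumptions for illustration, not the paper's actual design.

import torch
import torch.nn as nn
import torch.nn.functional as F

class MixedConnection(nn.Module):
    """Continuous relaxation of a discrete connection choice.

    Each candidate connection (here a plain linear map, standing in for
    the paper's unspecified temporal/interactive/contextual connections)
    is weighted by a softmax over learnable architecture logits `alpha`,
    so the choice of connection can be optimized by gradient descent.
    """

    def __init__(self, dim, num_candidates=3):
        super().__init__()
        self.candidates = nn.ModuleList(
            [nn.Linear(dim, dim) for _ in range(num_candidates)]
        )
        # Architecture parameters: one logit per candidate connection.
        self.alpha = nn.Parameter(torch.zeros(num_candidates))

    def forward(self, x):
        weights = F.softmax(self.alpha, dim=0)
        # Mixed output: softmax-weighted sum of all candidate connections.
        return sum(w * op(x) for w, op in zip(weights, self.candidates))

# Toy usage: hypothetical localized-interactor features of shape
# (batch, time, dim) pass through the mixed connection before a
# recurrent relation model.
dim = 256
mixed = MixedConnection(dim)
lstm = nn.LSTM(dim, dim, batch_first=True)
feats = torch.randn(2, 16, dim)
out, _ = lstm(mixed(feats))

After search converges, one would typically keep only the candidate with the largest weight in alpha and prune the rest, turning the learned mixture into a discrete ego-relational structure.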
Pages: 489-507
Number of pages: 19
Related Papers
50 records in total
  • [21] Event Recognition in Egocentric Videos Using a Novel Trajectory Based Feature
    Buddubariki, Vinodh
    Tulluri, Sunitha Gowd
    Mukherjee, Snehasis
    TENTH INDIAN CONFERENCE ON COMPUTER VISION, GRAPHICS AND IMAGE PROCESSING (ICVGIP 2016), 2016
  • [22] A BERT-Based Joint Channel-Temporal Modeling for Action Recognition
    Yang, Man
    Gan, Lipeng
    Cao, Runze
    Li, Xiaochao
    IEEE SENSORS JOURNAL, 2023, 23 (19) : 23765 - 23779
  • [23] Modeling Temporal Visual Salience for Human Action Recognition Enabled Visual Anonymity Preservation
    Al-Obaidi, Salah
    Al-Khafaji, Hiba
    Abhayaratne, Charith
    IEEE ACCESS, 2020, 8 : 213806 - 213824
  • [24] Hierarchical Task-aware Temporal Modeling and Matching for few-shot action recognition
    Zhan, Yucheng
    Pan, Yijun
    Wu, Siying
    Zhang, Yueyi
    Sun, Xiaoyan
    NEUROCOMPUTING, 2025, 624
  • [25] On an algorithm for human action recognition
    Sahoo, Suraj Prakash
    Ari, Samit
    EXPERT SYSTEMS WITH APPLICATIONS, 2019, 115 : 524 - 534
  • [26] Towards understanding action recognition
    Jhuang, Hueihan
    Gall, Juergen
    Zuffi, Silvia
    Schmid, Cordelia
    Black, Michael J.
    2013 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2013, : 3192 - 3199
  • [27] SmallTAL: Real-Time Egocentric Online Temporal Action Localization for the Data-Impoverished
    Joyce, Eric C.
    Chen, Yao
    Neeter, Eduardo
    Mordohai, Philippos
    PRESENCE-VIRTUAL AND AUGMENTED REALITY, 2023, 32 : 179 - 203
  • [28] Human action recognition based on action relevance weighted encoding
    Yi, Yang
    Li, Ao
    Zhou, Xiaofeng
    SIGNAL PROCESSING-IMAGE COMMUNICATION, 2020, 80
  • [29] An overview of Automatic Speech Recognition Preprocessing Techniques
    Labied, Maria
    Belangour, Abdessamad
    Banane, Mouad
    Erraissi, Allae
    2022 INTERNATIONAL CONFERENCE ON DECISION AID SCIENCES AND APPLICATIONS (DASA), 2022, : 804 - 809
  • [30] Automatic Facial Expression Recognition Using DCNN
    Mayya, Veena
    Pai, Radhika M.
    Pai, Manohara M. M.
    PROCEEDINGS OF THE 6TH INTERNATIONAL CONFERENCE ON ADVANCES IN COMPUTING AND COMMUNICATIONS, 2016, 93 : 453 - 461