Free-Form Composition Networks for Egocentric Action Recognition

Cited by: 0
Authors
Wang, Haoran [1 ]
Cheng, Qinghua [1 ]
Yu, Baosheng [2 ]
Zhan, Yibing [3 ]
Tao, Dapeng [4 ,5 ]
Ding, Liang [6 ]
Ling, Haibin [6 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Peoples R China
[2] Univ Sydney, Fac Engn, Sch Comp Sci, Sydney, NSW 2008, Australia
[3] JD Explore Acad, Beijing 100176, Peoples R China
[4] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650091, Yunnan, Peoples R China
[5] Yunnan Key Lab Media Convergence, Kunming 650051, Peoples R China
[6] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Feature extraction; Training data; Data models; Visualization; Training; Semantics; Data mining; Compositional learning; data scarcity; egocentric action recognition; ATTENTION
DOI
10.1109/TCSVT.2024.3406546
CLC Classification Number
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology]
Discipline Classification Code
0808; 0809
Abstract
Egocentric action recognition is gaining significant attention in the field of human action recognition. In this paper, we address the data scarcity issue in egocentric action recognition from a compositional generalization perspective. To tackle this problem, we propose a free-form composition network (FFCN) that simultaneously learns disentangled verb, preposition, and noun representations, and then uses them to compose new samples in the feature space for rare classes of action videos. First, we use a graph to capture the spatial-temporal relations among different hand/object instances in each action video. We thus decompose each action into a set of verb and preposition spatial-temporal representations using the edge features in the graph. The temporal decomposition extracts verb and preposition representations from different video frames, while the spatial decomposition adaptively learns verb and preposition representations from action-related instances in each frame. With these spatial-temporal representations of verbs and prepositions, we can compose new samples for rare classes in a free-form manner, not restricted to the rigid form of one verb plus one noun. The proposed FFCN can directly generate new training samples for rare classes, thereby significantly improving action recognition performance. We evaluated our method on three popular egocentric action recognition datasets, Something-Something V2, H2O, and EPIC-KITCHENS-100, and the experimental results demonstrate the effectiveness of the proposed method for handling data scarcity problems, including long-tailed and few-shot egocentric action recognition.
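The core idea in the abstract, composing new feature-space samples for rare classes from disentangled verb, preposition, and noun representations, can be illustrated with a minimal sketch. This is not the paper's implementation; the function name, feature dimensions, and the concatenation-based composition are hypothetical assumptions chosen only to make the "free-form" combination concrete.

```python
import numpy as np

def compose_rare_sample(verb_feats, prep_feats, noun_feats, seed=0):
    """Hypothetical sketch: build one new feature-space sample for a
    rare action class by freely pairing disentangled verb, preposition,
    and noun representations taken from other (frequent) classes."""
    rng = np.random.default_rng(seed)
    v = verb_feats[rng.integers(len(verb_feats))]   # one verb prototype
    p = prep_feats[rng.integers(len(prep_feats))]   # one preposition prototype
    n = noun_feats[rng.integers(len(noun_feats))]   # one noun prototype
    # Free-form: the parts need not follow a rigid verb+noun template;
    # here we simply concatenate the three components.
    return np.concatenate([v, p, n])

# Toy disentangled representations (e.g. pooled graph edge features).
verbs = np.random.default_rng(1).standard_normal((5, 8))    # 5 verbs, dim 8
preps = np.random.default_rng(2).standard_normal((3, 8))
nouns = np.random.default_rng(3).standard_normal((10, 8))
sample = compose_rare_sample(verbs, preps, nouns)
print(sample.shape)  # (24,)
```

In the actual FFCN, such composed features would augment the training set for long-tailed or few-shot classes; this sketch only shows the combinatorial recombination step in the feature space.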
Pages: 9967-9978 (12 pages)
Related Papers (50 total)
  • [31] Dense Semantics-Assisted Networks for Video Action Recognition
    Luo, Haonan
    Lin, Guosheng
    Yao, Yazhou
    Tang, Zhenmin
    Wu, Qingyao
    Hua, Xiansheng
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2022, 32 (05) : 3073 - 3084
  • [32] HIGH-QUALITY SOLID-MODELING SYSTEM WITH FREE-FORM SURFACES
    HIGASHI, M
    COMPUTER-AIDED DESIGN, 1993, 25 (03) : 172 - 183
  • [33] SOS! Self-supervised Learning over Sets of Handled Objects in Egocentric Action Recognition
    Escorcia, Victor
    Guerrero, Ricardo
    Zhu, Xiatian
    Martinez, Brais
    COMPUTER VISION, ECCV 2022, PT XIII, 2022, 13673 : 604 - 620
  • [34] Enhanced Gradient-Based Local Feature Descriptors by Saliency Map for Egocentric Action Recognition
    Zuo, Zheming
    Wei, Bo
    Chao, Fei
    Qu, Yanpeng
    Peng, Yonghong
    Yang, Longzhi
    APPLIED SYSTEM INNOVATION, 2019, 2 (01) : 1 - 14
  • [35] 3D Modeling of Free-Form Object (Interpolation, Visualization and Volume Estimation)
    Sisojevs, Aleksandrs
    Krechetova, Katrina
    Glazs, Aleksandrs
    WSCG 2009, COMMUNICATION PAPERS PROCEEDINGS, 2009, : 125 - +
  • [36] Adapting Action Recognition Neural Networks for Automated Infantile Spasm Detection
    Diop, Samuel
    Essid, Nouha
    Jouen, Francois
    Bergounioux, Jean
    Trabelsi, Imen
    IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, 2024, 32 : 3751 - 3760
  • [37] 3D Reconstruction of Free-Form Surface Based on Color Structured Light
    Yang Fan
    Ding Xiaojian
    Cao Jie
    ACTA OPTICA SINICA, 2021, 41 (02)
  • [38] NeRF-In: Free-Form Inpainting for Pretrained NeRF With RGB-D Priors
    Shen, I-Chao
    Liu, Hao-Kang
    Chen, Bing-Yu
    IEEE COMPUTER GRAPHICS AND APPLICATIONS, 2024, 44 (02) : 100 - 109
  • [39] Cross-View Generalisation in Action Recognition: Feature Design for Transitioning from Exocentric To Egocentric Views
    Rocha, Bernardo
    Moreno, Plinio
    Bernardino, Alexandre
    ROBOT 2023: SIXTH IBERIAN ROBOTICS CONFERENCE ADVANCES IN ROBOTICS, VOL 1, 2024, 976 : 155 - 166
  • [40] Device-Free Human Gesture Recognition With Generative Adversarial Networks
    Wang, Jie
    Zhang, Liang
    Wang, Changcheng
    Ma, Xiaorui
    Gao, Qinghua
    Lin, Bin
    IEEE INTERNET OF THINGS JOURNAL, 2020, 7 (08) : 7678 - 7688