Free-Form Composition Networks for Egocentric Action Recognition

Cited by: 0
Authors
Wang, Haoran [1 ]
Cheng, Qinghua [1 ]
Yu, Baosheng [2 ]
Zhan, Yibing [3 ]
Tao, Dapeng [4 ,5 ]
Ding, Liang [6 ]
Ling, Haibin [6 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Peoples R China
[2] Univ Sydney, Fac Engn, Sch Comp Sci, Sydney, NSW 2008, Australia
[3] JD Explore Acad, Beijing 100176, Peoples R China
[4] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650091, Yunnan, Peoples R China
[5] Yunnan Key Lab Media Convergence, Kunming 650051, Peoples R China
[6] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Feature extraction; Training data; Data models; Visualization; Training; Semantics; Data mining; Compositional learning; data scarcity; egocentric action recognition; attention
DOI
10.1109/TCSVT.2024.3406546
Chinese Library Classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Discipline Classification Codes
0808; 0809
Abstract
Egocentric action recognition is gaining significant attention in the field of human action recognition. In this paper, we address the data scarcity issue in egocentric action recognition from a compositional generalization perspective. To tackle this problem, we propose a free-form composition network (FFCN) that simultaneously learns disentangled verb, preposition, and noun representations, and then uses them to compose new samples in the feature space for rare classes of action videos. First, we use a graph to capture the spatial-temporal relations among different hand/object instances in each action video. We thus decompose each action into a set of verb and preposition spatial-temporal representations using the edge features in the graph. The temporal decomposition extracts verb and preposition representations from different video frames, while the spatial decomposition adaptively learns verb and preposition representations from action-related instances in each frame. With these spatial-temporal representations of verbs and prepositions, we can compose new samples for rare classes in a free-form manner that is not restricted to the rigid form of a verb plus a noun. The proposed FFCN can directly generate new training samples for rare classes, hence significantly improving action recognition performance. We evaluated our method on three popular egocentric action recognition datasets, Something-Something V2, H2O, and EPIC-KITCHENS-100, and the experimental results demonstrate the effectiveness of the proposed method for handling data scarcity problems, including long-tailed and few-shot egocentric action recognition.
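The free-form composition idea described in the abstract can be sketched as follows. This is a minimal illustration only: the feature banks, word names, and the concatenation operator are all assumptions made for the sketch; the actual FFCN composes learned spatial-temporal representations rather than random vectors.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 16  # feature dimensionality (illustrative)

# Hypothetical disentangled feature banks, standing in for the verb and
# preposition representations extracted by the spatial-temporal
# decomposition, plus noun representations.
verb_bank = {"put": rng.normal(size=D), "take": rng.normal(size=D)}
prep_bank = {"into": rng.normal(size=D), "onto": rng.normal(size=D)}
noun_bank = {"cup": rng.normal(size=D), "cupboard": rng.normal(size=D)}

def compose(parts):
    """Compose a new sample in feature space from an arbitrary
    (free-form) sequence of component features, not restricted to the
    rigid verb + noun form."""
    return np.concatenate([bank[word] for bank, word in parts])

# A rare class such as "put cup into cupboard" is synthesized from
# components that each appear in frequent classes:
sample = compose([
    (verb_bank, "put"),
    (noun_bank, "cup"),
    (prep_bank, "into"),
    (noun_bank, "cupboard"),
])
print(sample.shape)  # (64,) for four components of dimension 16
```

Such composed feature vectors can then be added to the training set of the rare class, which is how the paper's data-scarcity argument proceeds.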
Pages: 9967-9978
Number of pages: 12