Free-Form Composition Networks for Egocentric Action Recognition

Cited: 0
Authors
Wang, Haoran [1 ]
Cheng, Qinghua [1 ]
Yu, Baosheng [2 ]
Zhan, Yibing [3 ]
Tao, Dapeng [4 ,5 ]
Ding, Liang [6 ]
Ling, Haibin [6 ]
Affiliations
[1] Northeastern Univ, Coll Informat Sci & Engn, Shenyang 110819, Peoples R China
[2] Univ Sydney, Fac Engn, Sch Comp Sci, Sydney, NSW 2008, Australia
[3] JD Explore Acad, Beijing 100176, Peoples R China
[4] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650091, Yunnan, Peoples R China
[5] Yunnan Key Lab Media Convergence, Kunming 650051, Peoples R China
[6] SUNY Stony Brook, Dept Comp Sci, Stony Brook, NY 11794 USA
Keywords
Feature extraction; Training data; Data models; Visualization; Training; Semantics; Data mining; Compositional learning; data scarcity; egocentric action recognition; attention
DOI
10.1109/TCSVT.2024.3406546
CLC Number
TM [Electrical Engineering]; TN [Electronic Technology, Communication Technology]
Subject Classification
0808; 0809
Abstract
Egocentric action recognition is gaining significant attention in the field of human action recognition. In this paper, we address the data scarcity issue in egocentric action recognition from a compositional generalization perspective. To tackle this problem, we propose a free-form composition network (FFCN) that simultaneously learns disentangled verb, preposition, and noun representations and then uses them to compose new samples in the feature space for rare classes of action videos. First, we use a graph to capture the spatial-temporal relations among the hand/object instances in each action video. We then decompose each action into a set of verb and preposition spatial-temporal representations using the edge features of the graph. The temporal decomposition extracts verb and preposition representations from different video frames, while the spatial decomposition adaptively learns verb and preposition representations from the action-related instances in each frame. With these spatial-temporal representations of verbs and prepositions, we can compose new samples for rare classes in a free-form manner, i.e., without being restricted to the rigid verb-plus-noun form. The proposed FFCN can directly generate new training samples for rare classes, hence significantly improving action recognition performance. We evaluated our method on three popular egocentric action recognition datasets, Something-Something V2, H2O, and EPIC-KITCHENS-100, and the experimental results demonstrate the effectiveness of the proposed method for handling data scarcity problems, including long-tailed and few-shot egocentric action recognition.
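To make the composition idea concrete, below is a minimal PyTorch-style sketch (not the authors' code) of the feature-space recombination step: disentangled verb, preposition, and noun features drawn from frequent classes are fused into synthetic samples for a rare class. All module and function names (FeatureComposer, compose_rare_class) are hypothetical, and the fusion head is a simple stand-in for the paper's composition module.

```python
# Hypothetical sketch of free-form composition in feature space;
# names and architecture are illustrative, not the paper's implementation.
import torch
import torch.nn as nn


class FeatureComposer(nn.Module):
    """Fuses disentangled verb/preposition/noun features into one
    action-level feature usable as a synthetic training sample."""

    def __init__(self, dim: int = 256):
        super().__init__()
        # Simple MLP fusion head (assumption; the paper's module may differ).
        self.fuse = nn.Sequential(
            nn.Linear(3 * dim, dim),
            nn.ReLU(inplace=True),
            nn.Linear(dim, dim),
        )

    def forward(self, verb, prep, noun):
        # verb/prep/noun: (batch, dim) disentangled component features.
        return self.fuse(torch.cat([verb, prep, noun], dim=-1))


def compose_rare_class(composer, verb_bank, prep_bank, noun_bank, n_samples):
    """Draws component features (possibly from different source classes)
    and composes synthetic features for a rare target class."""
    idx = lambda bank: torch.randint(len(bank), (n_samples,))
    return composer(verb_bank[idx(verb_bank)],
                    prep_bank[idx(prep_bank)],
                    noun_bank[idx(noun_bank)])


if __name__ == "__main__":
    dim = 256
    composer = FeatureComposer(dim)
    # Toy banks standing in for features extracted from frequent classes.
    verb_bank = torch.randn(100, dim)  # e.g., "put" verb features
    prep_bank = torch.randn(100, dim)  # e.g., "into" preposition features
    noun_bank = torch.randn(40, dim)   # e.g., "cup" noun features
    synthetic = compose_rare_class(composer, verb_bank, prep_bank,
                                   noun_bank, n_samples=16)
    print(synthetic.shape)  # torch.Size([16, 256])
```

In the paper, the verb and preposition features would come from the spatial-temporal graph decomposition rather than random banks; the sketch only illustrates how free-form recombination can synthesize feature-space samples for rare classes.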
Pages
9967-9978 (12 pages)