DDC3N: Doppler-Driven Convolutional 3D Network for Human Action Recognition

Cited by: 2
Authors
Toshpulatov, Mukhiddin [1 ]
Lee, Wookey [1 ]
Lee, Suan [2 ]
Yoon, Hoyoung [3 ]
Kang, U. [3]
Affiliations
[1] Inha Univ, Biomed Sci & Engn, Incheon 22212, South Korea
[2] Semyung Univ, Sch Comp Sci, Jecheon 27136, South Korea
[3] Seoul Natl Univ, Dept Comp Sci & Engn, Seoul 08826, South Korea
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
3D pose estimation; discriminator; deep neural network; deep learning; generator; mesh estimation; metadata; skeleton; top-down approach; motion embedding; optical flow map; channel-wise; spatiotemporal; doppler; dataset; action recognition; 2D;
DOI
10.1109/ACCESS.2024.3422428
CLC number
TP [Automation technology, computer technology];
Discipline classification code
0812;
Abstract
Deep learning (DL)-based human action recognition (HAR) has made considerable progress, yet accurately classifying the actions of sports athletes remains an open problem. This is mainly due to the lack of comprehensive datasets of athletes' actions and to persistent challenges posed by varying camera viewpoints, changing lighting conditions, and occlusions. This work thoroughly examines existing HAR datasets, providing a benchmark for assessing state-of-the-art methods. Given the scarcity of datasets covering athlete actions, we curate two datasets tailored specifically to sports athletes and analyze their impact on recognition performance. Although 3D convolutional neural networks (3DCNN) outperform graph convolutional networks (GCN) in HAR, they incur a considerable computational overhead, particularly on large datasets. We introduce new methods and a more resource-efficient solution for HAR that reduces the computational load of the 3DCNN architecture. The result is a multifaceted approach to improving HAR for surveillance cameras, closing gaps in the literature, overcoming computational bottlenecks, and delivering measurable gains in the accuracy and efficiency of HAR frameworks.
Pages: 93546-93567
Number of pages: 22
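The abstract contrasts clip-based 3DCNN recognition with GCN-based skeleton methods in terms of compute. As a rough, illustrative sketch only (this is not the DDC3N architecture; the layer widths, the 10-class head, and the 16-frame 112x112 clip shape are assumptions chosen for brevity), the PyTorch snippet below shows the kind of 3D convolution whose kernels span time as well as space, which is the main source of the overhead the abstract refers to.

```python
# Illustrative sketch only: a minimal 3D-CNN classifier for clip-based HAR,
# NOT the DDC3N model from the paper; all sizes below are assumptions.
import torch
import torch.nn as nn

class Tiny3DCNN(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.features = nn.Sequential(
            # Conv3d kernels slide over (time, height, width), so cost grows
            # with clip length as well as spatial resolution.
            nn.Conv3d(3, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),      # halve T, H, W
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),          # global spatiotemporal pooling
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, frames, height, width), e.g. an RGB clip
        feats = self.features(x).flatten(1)
        return self.classifier(feats)

if __name__ == "__main__":
    clip = torch.randn(2, 3, 16, 112, 112)    # 2 clips of 16 RGB frames
    logits = Tiny3DCNN(num_classes=10)(clip)
    print(logits.shape)                        # torch.Size([2, 10])
```

A skeleton-based GCN, by contrast, operates on a handful of joint coordinates per frame rather than full video volumes, which is why the trade-off discussed in the abstract is between the 3DCNN's accuracy on raw video and its higher compute cost.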