Challenges in Video-Based Infant Action Recognition: A Critical Examination of the State of the Art

Cited by: 3
Authors
Hatamimajoumerd, Elaheh [1 ,2 ]
Kakhaki, Pooria Daneshvar [1 ]
Huang, Xiaofei [1 ]
Luan, Lingfei [3 ]
Amraee, Somaieh [1 ,2 ]
Ostadabbas, Sarah [1 ]
Affiliations
[1] Northeastern Univ, Dept Elect & Comp Engn, Boston, MA 02115 USA
[2] Northeastern Univ, Roux Inst, Portland, ME USA
[3] Univ Minnesota, Minneapolis, MN USA
Source
2024 IEEE WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION WORKSHOPS, WACVW 2024 | 2024
DOI
10.1109/WACVW60836.2024.00010
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405
Abstract
Automated human action recognition, a burgeoning field within computer vision, has diverse applications spanning surveillance, security, human-computer interaction, tele-health, and sports analysis. Precise action recognition in infants serves a multitude of pivotal purposes, encompassing safety monitoring, developmental milestone tracking, early intervention for developmental delays, fostering parent-infant bonds, advancing computer-aided diagnostics, and contributing to the scientific comprehension of child development. This paper delves into the intricacies of infant action recognition, a domain that has remained relatively uncharted despite the accomplishments in adult action recognition. In this study, we introduce a ground-breaking dataset called "InfActPrimitive", encompassing five significant infant milestone action categories, and we incorporate specialized preprocessing for infant data. We conducted an extensive comparative analysis employing cutting-edge skeleton-based action recognition models using this dataset. Our findings reveal that, although the PoseC3D model achieves the highest accuracy at approximately 71%, the remaining models struggle to accurately capture the dynamics of infant actions. This highlights a substantial knowledge gap between the infant and adult action recognition domains and the urgent need for data-efficient pipeline models.
Pages: 21-30
Page count: 10