2D Human pose estimation: a survey

Cited by: 0
Authors
Haoming Chen
Runyang Feng
Sifan Wu
Hao Xu
Fengcheng Zhou
Zhenguang Liu
Affiliations
[1] Zhejiang Gongshang University
[2] Zhejiang Lab
Source
Multimedia Systems | 2023, Vol. 29
Keywords
Human pose estimation; Pose estimation; Survey; Deep learning; Convolutional neural network
DOI
Not available
Abstract
Human pose estimation aims to localize human anatomical keypoints or body parts in input data (e.g., images, videos, or signals). It is a crucial component in enabling machines to gain an insightful understanding of human behavior, and has become a salient problem in computer vision and related fields. Deep learning techniques learn feature representations directly from data, significantly pushing the performance boundary of human pose estimation. In this paper, we gather the recent achievements of 2D human pose estimation methods and present a comprehensive survey. Briefly, existing approaches direct their efforts in three directions, namely network architecture design, network training refinement, and post-processing. Network architecture design concerns the architecture of human pose estimation models, extracting more robust features for keypoint recognition and localization. Network training refinement taps into the training of neural networks and aims to improve the representational ability of models. Post-processing further incorporates model-agnostic polishing strategies to improve the performance of keypoint detection. More than 200 research contributions are covered in this survey, spanning methodological frameworks, common benchmark datasets, evaluation metrics, and performance comparisons. We seek to provide researchers with a comprehensive and systematic review of human pose estimation, allowing them to acquire a panoramic view of the field and better identify future directions.
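To make the keypoint-localization task concrete: many 2D pose estimators predict one heatmap per joint and recover pixel coordinates in a post-processing step. The snippet below is a minimal illustrative sketch of plain argmax-based heatmap decoding, assuming NumPy; the `decode_keypoints` helper is a hypothetical name, not the method of any particular surveyed paper.

```python
import numpy as np

def decode_keypoints(heatmaps):
    """Decode (x, y) coordinates and confidences from per-joint heatmaps.

    heatmaps: array of shape (num_joints, H, W), one response map per joint.
    Returns (coords, conf): coords has shape (num_joints, 2) as (x, y),
    conf has shape (num_joints,) with the peak response of each map.
    Illustrative sketch only; real systems often refine the argmax with
    sub-pixel strategies (e.g., offset by a fraction toward the second peak).
    """
    num_joints, h, w = heatmaps.shape
    flat = heatmaps.reshape(num_joints, -1)
    idx = flat.argmax(axis=1)              # flat index of each map's peak
    ys, xs = np.unravel_index(idx, (h, w)) # convert back to row/column
    conf = flat.max(axis=1)                # peak value as confidence score
    return np.stack([xs, ys], axis=1), conf
```

Because this decoding step is independent of the network that produced the heatmaps, refinements to it fall under the model-agnostic post-processing direction discussed above.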
Pages: 3115 - 3138
Number of pages: 23
Related articles
50 results in total
  • [1] 2D Human pose estimation: a survey
    Chen, Haoming
    Feng, Runyang
    Wu, Sifan
    Xu, Hao
    Zhou, Fengcheng
    Liu, Zhenguang
    MULTIMEDIA SYSTEMS, 2023, 29 (05) : 3115 - 3138
  • [2] The Progress of Human Pose Estimation: A Survey and Taxonomy of Models Applied in 2D Human Pose Estimation
    Munea, Tewodros Legesse
    Jembre, Yalew Zelalem
    Weldegebriel, Halefom Tekle
    Chen, Longbiao
    Huang, Chenxi
    Yang, Chenhui
    IEEE ACCESS, 2020, 8 : 133330 - 133348
  • [3] Deep Learning Based 2D Human Pose Estimation: A Survey
    Dang, Qi
    Yin, Jianqin
    Wang, Bin
    Zheng, Wenqing
    TSINGHUA SCIENCE AND TECHNOLOGY, 2019, 24 (06) : 663 - 676
  • [4] A Survey on Deep Learning-Based 2D Human Pose Estimation Models
    Salisu, Sani
    Mohamed, A. S. A.
    Jaafar, M. H.
    Pauzi, Ainun S. B.
    Younis, Hussain A.
    CMC-COMPUTERS MATERIALS & CONTINUA, 2023, 76 (02) : 2385 - 2400
  • [5] EvoPose2D: Pushing the Boundaries of 2D Human Pose Estimation Using Accelerated Neuroevolution With Weight Transfer
    McNally, William
    Vats, Kanav
    Wong, Alexander
    McPhee, John
    IEEE ACCESS, 2021, 9 : 139403 - 139414
  • [6] A comprehensive survey on 2D multi-person pose estimation methods
    Wang, Chen
    Zhang, Feng
    Ge, Shuzhi Sam
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2021, 102
  • [7] 3D Human Pose Estimation Using Convolutional Neural Networks with 2D Pose Information
    Park, Sungheon
    Hwang, Jihye
    Kwak, Nojun
    COMPUTER VISION - ECCV 2016 WORKSHOPS, PT III, 2016, 9915 : 156 - 169
  • [8] UniPose+: A Unified Framework for 2D and 3D Human Pose Estimation in Images and Videos
    Artacho, Bruno
    Savakis, Andreas
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2022, 44 (12) : 9641 - 9653