3D Human Pose Estimation from multi-view thermal vision sensors

Cited by: 5
Authors
Lupion, Marcos [1 ]
Polo-Rodriguez, Aurora [2 ]
Medina-Quero, Javier [3 ]
Sanjuan, Juan F. [1 ]
Ortigosa, Pilar M. [1 ]
Affiliations
[1] Univ Almeria, Dept Informat, CeIA3, Almeria 04120, Andalucia, Spain
[2] Univ Jaen, Dept Comp Sci, Campus Lagunillas, Jaen 23071, Andalucia, Spain
[3] Univ Granada, Higher Tech Sch Comp Engn & Telecommun, Dept Comp Engn Automat & Robot, E-18071 Granada, Andalucia, Spain
Keywords
Thermal vision; 3D human pose estimation; Convolutional neural networks;
DOI
10.1016/j.inffus.2023.102154
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Human Pose Estimation from images allows the recognition of key daily activity patterns in Smart Environments. Current State-of-the-art (SOTA) 3D pose estimators are built on visible spectrum images, which can lead to privacy concerns in Ambient Assisted Living solutions. Thermal Vision sensors are being deployed in these environments, as they preserve privacy and operate in low brightness conditions. Furthermore, multi-view setups provide the most accurate 3D pose estimation, as the occlusion problem is overcome by having images from different perspectives. Nevertheless, no solutions in the literature use thermal vision sensors following a multi-view scheme. In this work, a multi-view setup consisting of low-cost devices is deployed in the Smart Home of the University of Almeria. Thermal and visible images are paired using homography, and SOTA solutions such as YOLOv3 and Blazepose are used to annotate the bounding box and 2D pose in the thermal images. ThermalYOLO is built by fine-tuning YOLOv3 and outperforms YOLOv3 by 5% in bounding box recognition and by 1% in IoU. Furthermore, InceptionResNetV2 is found to be the most appropriate architecture for 2D pose estimation. Finally, a 3D pose estimator is built by comparing input approaches and convolutional architectures. Results show that the most appropriate architecture processes three single-channel thermal images with independent convolutional backbones (ResNet50 in this case), whose outputs are then fused with the 2D poses. The resulting convolutional neural network shows excellent behaviour under occlusions, performing on par with multi-view SOTA approaches in the visible spectrum.
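The abstract states that thermal and visible images are paired using homography, i.e. a 3x3 projective mapping between the two image planes. As a rough illustration of that underlying mapping (not the authors' implementation; the function name and example matrices are assumptions), a pixel in one view is mapped to the other view like this:

```python
def apply_homography(h, x, y):
    """Map pixel (x, y) through a 3x3 homography h given as a row-major
    nested list; returns the mapped (x', y') after perspective division."""
    xp = h[0][0] * x + h[0][1] * y + h[0][2]
    yp = h[1][0] * x + h[1][1] * y + h[1][2]
    w = h[2][0] * x + h[2][1] * y + h[2][2]
    return xp / w, yp / w


# Illustrative matrices only: identity (no warp) and a uniform 2x scaling.
identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
scale2 = [[2, 0, 0], [0, 2, 0], [0, 0, 1]]
```

In practice the matrix would be estimated once per camera pair from corresponding points (e.g. a calibration target visible in both spectra) and then reused to transfer bounding boxes and keypoints from the visible image to the thermal image.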
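ThermalYOLO's reported 1% gain is in IoU (Intersection over Union), the standard overlap metric for detector bounding boxes. A minimal sketch for axis-aligned boxes in (x1, y1, x2, y2) form (the box format is an assumption, not taken from the paper):

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

IoU is 1.0 for identical boxes, 0.0 for disjoint ones, so a 1% absolute improvement corresponds to tighter localization of the person in the thermal frame.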
Pages: 15
References
74 references in total
  • [1] Agarwal A., 2005, SURVEY PLANAR HOMOGR
  • [2] 2D Human Pose Estimation: New Benchmark and State of the Art Analysis
    Andriluka, Mykhaylo
    Pishchulin, Leonid
    Gehler, Peter
    Schiele, Bernt
    [J]. 2014 IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2014, : 3686 - 3693
  • [3] Deep 3D Body Landmarks Estimation for Smart Garments Design
    Baronetto, Annalisa
    Wassermann, Dominik
    Amft, Oliver
    [J]. 2021 IEEE 17TH INTERNATIONAL CONFERENCE ON WEARABLE AND IMPLANTABLE BODY SENSOR NETWORKS (BSN), 2021,
  • [4] Bartol K., 2020, INT C EXH 3D BOD SCA
  • [5] Bazarevsky V, 2020, arXiv:2006.10204, DOI 10.48550/arXiv.2006.10204
  • [6] A review of deep learning techniques for 2D and 3D human pose estimation
    Ben Gamra, Miniar
    Akhloufi, Moulay A.
    [J]. IMAGE AND VISION COMPUTING, 2021, 114
  • [7] Exploiting Spatial-temporal Relationships for 3D Pose Estimation via Graph Convolutional Networks
    Cai, Yujun
    Ge, Liuhao
    Liu, Jun
    Cai, Jianfei
    Cham, Tat-Jen
    Yuan, Junsong
    Thalmann, Nadia Magnenat
    [J]. 2019 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV 2019), 2019, : 2272 - 2281
  • [8] OpenPose: Realtime Multi-Person 2D Pose Estimation Using Part Affinity Fields
    Cao, Zhe
    Hidalgo, Gines
    Simon, Tomas
    Wei, Shih-En
    Sheikh, Yaser
    [J]. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2021, 43 (01) : 172 - 186
  • [9] Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
    Cao, Zhe
    Simon, Tomas
    Wei, Shih-En
    Sheikh, Yaser
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017, : 1302 - 1310
  • [10] Deep Learning Based 2D Human Pose Estimation: A Survey
    Dang, Qi
    Yin, Jianqin
    Wang, Bin
    Zheng, Wenqing
    [J]. TSINGHUA SCIENCE AND TECHNOLOGY, 2019, 24 (06) : 663 - 676