MobiRFPose: Portable RF-Based 3D Human Pose Camera

Cited by: 1
Authors
Yu, Cong [1 ]
Zhang, Dongheng [2 ]
Wu, Zhi [2 ]
Xie, Chunyang [3 ]
Lu, Zhi [2 ]
Hu, Yang [4 ]
Chen, Yan [2 ]
Affiliations
[1] China Acad Engn Phys, Inst Elect Engn, Mianyang 621900, Peoples R China
[2] Univ Sci & Technol China, Sch Cyber Sci & Technol, Hefei 230026, Peoples R China
[3] Univ Elect Sci & Technol China, Sch Informat & Commun Engn, Chengdu 611731, Peoples R China
[4] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230026, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
RF signals; Three-dimensional displays; Antenna arrays; Pose estimation; Radio frequency; Cameras; Computational modeling; Human pose estimation; lightweight model; wireless sensing;
DOI
10.1109/TMM.2023.3314979
CLC number
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
Existing RF-based human pose estimation methods usually require intensive computation and cannot meet the real-time processing and portability requirements of mobile devices. To tackle this limitation, in this article we introduce a lightweight RF-based pose estimation model, MobiRFPose, to construct a portable RF-based pose camera. Unlike traditional optical cameras, the RF-based camera captures no visual information and is therefore inherently privacy-preserving. Specifically, we use only a horizontal antenna array to transmit and receive RF signals, then estimate human locations on the RF signal heatmap and crop the regions around those locations, and finally estimate fine-grained human poses from the small cropped RF signal heatmaps. To evaluate performance, we compare MobiRFPose with state-of-the-art methods. Experimental results demonstrate that MobiRFPose achieves accurate 3D human pose estimation with fewer parameters and computations. We also test the trained MobiRFPose model on mobile computing devices, where the model structure and parameters occupy only 268 KB and 3226 KB of disk space, respectively, and MobiRFPose achieves a processing speed of 66 FPS. The pose estimation error is 11.05 cm for a single person and 11.29 cm for multiple people. All experimental results indicate that the proposed method can realize a portable RF camera that estimates human poses accurately.
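The coarse-to-fine pipeline described in the abstract (locate people on the RF heatmap, crop a small window around each location, then run fine-grained pose estimation on the crops) can be sketched as below. This is a minimal illustrative sketch, not the paper's implementation: the function name, crop size, and peak-detection threshold are all assumptions made for the example, and the fine-grained pose network that would consume the crops is omitted.

```python
import numpy as np

def locate_and_crop(heatmap, crop_size=8, threshold=0.5):
    """Coarse stage (hypothetical sketch): find candidate person
    locations on a 2D RF heatmap as local maxima above a threshold,
    and crop a fixed-size window around each one."""
    half = crop_size // 2
    # Pad so that crops near the border keep a fixed size.
    padded = np.pad(heatmap, half, mode="constant")
    h, w = heatmap.shape
    peaks = []
    for y in range(h):
        for x in range(w):
            v = heatmap[y, x]
            if v < threshold:
                continue
            # Local-maximum test over the 3x3 neighborhood
            # (padded coords: heatmap[y, x] == padded[y + half, x + half]).
            window = padded[y + half - 1:y + half + 2,
                            x + half - 1:x + half + 2]
            if v >= window.max():
                peaks.append((y, x))
    # Fine stage input: one small crop per detected person,
    # which would then be fed to a lightweight pose network.
    crops = [padded[y:y + crop_size, x:x + crop_size] for (y, x) in peaks]
    return peaks, crops
```

Cropping before pose estimation is what keeps the fine-grained stage cheap: the pose network only ever sees small fixed-size heatmap patches rather than the full scene, which is consistent with the abstract's emphasis on few parameters and real-time mobile processing.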
Pages: 3715-3727
Page count: 13
Related papers
57 in total
  • [1] MoveNet: A Deep Neural Network for Joint Profile Prediction Across Variable Walking Speeds and Slopes
    Bajpai, Rishabh
    Joshi, Deepak
    [J]. IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2021, 70
  • [2] Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields
    Cao, Zhe
    Simon, Tomas
    Wei, Shih-En
    Sheikh, Yaser
    [J]. 30TH IEEE CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2017), 2017: 1302-1310
  • [3] Chen J., 2022, IEEE Trans. Mobile Comput., early access, DOI 10.1109/TMC.2022.3214721
  • [4] SpeedNet: Indoor Speed Estimation With Radio Signals
    Chen, Yan
    Deng, Hongyu
    Zhang, Dongheng
    Hu, Yang
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2021, 8(4): 2762-2774
  • [5] Residual Carrier Frequency Offset Estimation and Compensation for Commodity WiFi
    Chen, Yan
    Su, Xiang
    Hu, Yang
    Zeng, Bing
    [J]. IEEE TRANSACTIONS ON MOBILE COMPUTING, 2020, 19(12): 2891-2902
  • [6] RMPE: Regional Multi-Person Pose Estimation
    Fang, Hao-Shu
    Xie, Shuqin
    Tai, Yu-Wing
    Lu, Cewu
    [J]. 2017 IEEE INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), 2017: 2353-2362
  • [7] Howard AG, 2017, arXiv:1704.04861
  • [8] Geng JQ, 2022, arXiv:2301.00250, DOI 10.48550/arXiv.2301.00250
  • [9] Energy Optimization and QoE Satisfaction for Wireless Visual Sensor Networks in Multi Target Tracking Scenario
    Ghazalian, Reza
    Aghagolzadeh, Ali
    Andargoli, Seyed Mehdi Hosseini
    [J]. IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23: 823-834
  • [10] Enabling Heterogeneous Connectivity in Internet of Things: A Time-Reversal Approach
    Han, Yi
    Chen, Yan
    Wang, Beibei
    Liu, K. J. Ray
    [J]. IEEE INTERNET OF THINGS JOURNAL, 2016, 3(6): 1036-1047