A Mobile LiDAR-Based Deep Learning Approach for Real-Time 3D Body Measurement

Citations: 0
Authors
Jeong, Yongho [1 ]
Noh, Taeuk [1 ]
Lee, Yonghak [1 ]
Lee, Seonjae [1 ]
Choi, Kwangil [1 ]
Jeong, Sujin [1 ]
Kim, Sunghwan [1 ]
Affiliations
[1] Konkuk Univ, Dept Appl Stat, Seoul 05029, South Korea
Source
APPLIED SCIENCES-BASEL | 2025, Vol. 15, Issue 4
Funding
National Research Foundation of Singapore;
Keywords
LiDAR; HRNet; deep learning; keypoint;
DOI
10.3390/app15042001
CLC number
O6 [Chemistry];
Subject classification code
0703;
Abstract
In this study, we propose a method for automatically measuring body circumferences using the LiDAR sensor built into mobile devices. Traditional body measurement methods rely mainly on 2D images or manual measurement; this research instead uses 3D depth information to improve both accuracy and efficiency. HRNet-based keypoint detection with transfer learning identifies the precise locations of body parts, and these locations are combined with depth maps to compute body circumferences automatically. Experimental results show that the proposed method has a relative error of at most 8% for major body parts such as the waist, chest, hips, and buttocks, with waist and buttock measurements achieving error rates below 4%. Some models showed error rates of 7.8% and 7.4% for hip circumference, which we attribute to the complexity of the 3D structure and the difficulty of selecting keypoint locations. In addition, depth map-based keypoint correction and regression analysis significantly improved accuracy over conventional 2D-based measurement methods, and the method achieved excellent real-time processing speed with stable performance across various body types.
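The abstract describes combining detected keypoints with LiDAR depth to compute circumferences. A minimal sketch of one plausible version of this step, assuming a pinhole camera model and an elliptical cross-section (the function names, parameters, and the Ramanujan perimeter approximation are illustrative assumptions, not the paper's stated method):

```python
import math

def ellipse_circumference(a: float, b: float) -> float:
    """Ramanujan's approximation for the perimeter of an ellipse with semi-axes a and b."""
    return math.pi * (3 * (a + b) - math.sqrt((3 * a + b) * (a + 3 * b)))

def body_circumference(width_px: float, depth_m: float,
                       body_depth_m: float, focal_px: float) -> float:
    """Estimate a body-part circumference (metres) from keypoints plus depth.

    width_px     -- pixel distance between the left/right keypoints of the part
    depth_m      -- LiDAR depth at the keypoints (camera-to-body distance)
    body_depth_m -- front-to-back thickness of the part read from the depth map
    focal_px     -- camera focal length in pixels (pinhole model)
    """
    # Pinhole projection: real-world width = pixel width * depth / focal length.
    width_m = width_px * depth_m / focal_px
    # Model the horizontal cross-section as an ellipse with these semi-axes.
    return ellipse_circumference(width_m / 2.0, body_depth_m / 2.0)
```

For example, keypoints 300 px apart at 1.5 m depth with a 1500 px focal length give a 0.3 m body width; combined with a 0.22 m front-to-back thickness, the estimated circumference is roughly 0.8 m, a plausible waist value.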
Pages: 33