Safety Helmet Wearing Detection Model Based on Improved YOLO-M

Cited by: 15
Authors
Wang, Lili [1]
Zhang, Xinjie [1]
Yang, Hailu [1]
Affiliations
[1] Harbin Univ Sci & Technol, Sch Comp Sci & Technol, Harbin 150080, Peoples R China
Funding
National Natural Science Foundation of China; Natural Science Foundation of Heilongjiang Province;
Keywords
Attention mechanism; feature fusion; safety helmet; YOLOv5s model;
DOI
10.1109/ACCESS.2023.3257183
CLC Number
TP [Automation Technology, Computer Technology];
Subject Classification Code
0812;
Abstract
In recent years, construction accidents have occurred frequently, bringing safety measures for construction personnel into sharp focus. Wearing a safety helmet is one of the most important requirements for protecting construction personnel, so detecting whether helmets are being worn has become necessary. To address the problems of existing helmet-wearing detection algorithms, such as excessive parameters, substantial detection interference, and low detection accuracy, this paper proposes a helmet-wearing detection model, YOLO-M. Firstly, MobileNetv3 is adopted as the backbone network of YOLOv5s for feature extraction, which reduces the number of model parameters and the model size. Secondly, a residual edge is introduced into the feature fusion stage so that the original feature-map information is fused, enhancing the detection of small targets. Finally, a new attention module, BiCAM, is designed by changing the connection between the channel attention module (CAM) and the spatial attention module (SAM). Comparison experiments show that the detection accuracy of YOLO-M is 2.22% higher than that of YOLOv5s, while the number of parameters is reduced to 3/4 of that of YOLOv5s. Under the same detection conditions, the detection speed of YOLO-M is better than that of the other models, and the model meets the accuracy requirements of helmet detection in construction scenes.
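The abstract names the BiCAM attention module but does not spell out its internal structure. As a rough, non-authoritative illustration, the Python (PyTorch) sketch below combines a CBAM-style channel attention module (CAM) and spatial attention module (SAM) in parallel rather than in sequence; the actual connection used in YOLO-M, the layer sizes, and the class names here are assumptions made only for illustration.

# Hypothetical sketch of a BiCAM-style attention block: channel attention (CAM)
# and spatial attention (SAM) are applied in parallel and fused, instead of
# CBAM's CAM-then-SAM cascade. The exact wiring used in YOLO-M is not given in
# the abstract, so this layout is an illustrative assumption.
import torch
import torch.nn as nn


class ChannelAttention(nn.Module):
    """CBAM-style channel attention (CAM)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))   # global average-pooling branch
        mx = self.mlp(x.amax(dim=(2, 3)))    # global max-pooling branch
        return torch.sigmoid(avg + mx).view(b, c, 1, 1)


class SpatialAttention(nn.Module):
    """CBAM-style spatial attention (SAM)."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        avg = x.mean(dim=1, keepdim=True)    # channel-wise average map
        mx, _ = x.max(dim=1, keepdim=True)   # channel-wise max map
        return torch.sigmoid(self.conv(torch.cat([avg, mx], dim=1)))


class BiCAM(nn.Module):
    """Parallel fusion of CAM and SAM attention maps (assumed connection)."""

    def __init__(self, channels: int):
        super().__init__()
        self.cam = ChannelAttention(channels)
        self.sam = SpatialAttention()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Both attention maps are computed from the same input feature map
        # and applied jointly.
        return x * self.cam(x) * self.sam(x)


if __name__ == "__main__":
    feat = torch.randn(1, 128, 40, 40)       # e.g. a neck feature map
    print(BiCAM(128)(feat).shape)            # torch.Size([1, 128, 40, 40])

In a YOLOv5s-style network, such a block would typically be attached to selected backbone or neck feature maps; the check at the bottom only verifies that the attention block preserves the feature-map shape.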
Pages: 26247-26257
Number of pages: 11