Research on Fire Smoke Detection Algorithm Based on Improved YOLOv8

Cited by: 4
Authors
Zhang, Tianxin [1 ]
Wang, Fuwei [1 ]
Wang, Weimin [1 ]
Zhao, Qihao [1 ]
Ning, Weijun [1 ]
Wu, Haodong [1 ]
Affiliations
[1] Liaoning Petrochem Univ, Sch Artificial Intelligence & Software, Fushun 113005, Peoples R China
Source
IEEE ACCESS | 2024, Vol. 12
Keywords
Feature extraction; Accuracy; YOLO; Detection algorithms; Convolutional neural networks; Forestry; Attention mechanisms; Fire detection; YOLOv8; EMA; PAN-Bag
DOI
10.1109/ACCESS.2024.3448608
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
Fire has consistently posed a significant disaster risk worldwide. Current fire detection methods rely primarily on traditional physical sensors such as light, smoke, and temperature detectors, which often struggle in complex environments. The susceptibility of existing fire detection technologies to background interference frequently results in false alarms, missed detections, and low detection accuracy. To address these issues, this paper proposes a fire detection algorithm based on an improved YOLOv8 model. First, to strengthen the detection of large-scale fire and smoke targets, a large-target detection head is added to the backbone of the YOLOv8 model; this expands the network's receptive field, allowing it to capture a broader range of contextual information and identify fires over extensive areas. Second, an efficient multi-scale attention mechanism based on cross-space learning, EMA (Efficient Multi-Scale Attention), is integrated into the FPN (Feature Pyramid Network) part of the model; this mechanism highlights target features while suppressing background interference. Third, a PAN-Bag (Path Aggregation Network Bag) structure is proposed to help the model more accurately detect objects such as fire and smoke, whose feature distributions are uneven and whose morphologies are highly variable. With these improvements, we introduce the YOLOv8-FEP algorithm, which offers higher detection accuracy. Experimental results show that YOLOv8-FEP improves mAP by 3.1% and accuracy by 5.8% over the original YOLOv8, demonstrating the effectiveness of the enhanced algorithm.
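The abstract describes an EMA-style attention that pools features along each spatial axis and uses the resulting strips to reweight channel groups, highlighting targets while suppressing background. The sketch below is a minimal, hypothetical NumPy simplification of that idea — grouped channels gated by H- and W-direction strip pooling — not the authors' exact EMA module; the `groups` parameter and the additive sigmoid gate are assumptions for illustration.

```python
import numpy as np


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def ema_attention_sketch(x, groups=4):
    """Simplified EMA-style attention sketch.

    x: feature map of shape (C, H, W). Channels are split into `groups`
    groups; each group is reweighted by a gate built from 1-D average
    pooling along the H and W axes (the two "strip" branches).
    """
    c, h, w = x.shape
    assert c % groups == 0, "channel count must be divisible by groups"
    gc = c // groups
    out = np.empty_like(x)
    for g in range(groups):
        xg = x[g * gc:(g + 1) * gc]            # (gc, H, W) channel group
        ph = xg.mean(axis=2, keepdims=True)    # (gc, H, 1) pooled along W
        pw = xg.mean(axis=1, keepdims=True)    # (gc, 1, W) pooled along H
        # Broadcast the two strips back to (gc, H, W) and gate in (0, 1):
        # rows/columns with strong responses are emphasized, the rest damped.
        gate = sigmoid(ph + pw - xg.mean())
        out[g * gc:(g + 1) * gc] = xg * gate
    return out
```

Because the gate is strictly between 0 and 1, the module can only attenuate activations, never amplify them — a common property of attention reweighting that keeps feature magnitudes bounded.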
Pages: 117354-117362
Number of pages: 9