Intelligent segmentation of wildfire region and interpretation of fire front in visible light images from the viewpoint of an unmanned aerial vehicle (UAV)

Cited: 0
Authors
Li, Jianwei [1 ]
Wan, Jiali [1 ]
Sun, Long [2 ]
Hu, Tongxin [2 ]
Li, Xingdong [3 ]
Zheng, Huiru [4 ]
Affiliations
[1] Fuzhou Univ, Coll Phys & Informat Engn, Fuzhou 350116, Peoples R China
[2] Northeast Forestry Univ, Coll Forestry, Key Lab Sustainable Forest Ecosyst Management, Harbin 150040, Peoples R China
[3] Northeast Forestry Univ, Coll Mech & Elect Engn, 26 Hexing Rd, Harbin 150040, Peoples R China
[4] Ulster Univ, Sch Comp, Belfast BT15 1ED, Northern Ireland
Funding
China Postdoctoral Science Foundation;
Keywords
Attention mechanism; Convolutional neural network; Deep learning; Wildfire segmentation; Fire front interpretation; Unmanned aerial vehicle; ALGORITHM; SPREAD; YOLO;
DOI
10.1016/j.isprsjprs.2024.12.025
Chinese Library Classification
P9 [Physical Geography];
Subject Classification Codes
0705; 070501;
Abstract
The acceleration of global warming and intensifying global climate anomalies have led to a rise in the frequency of wildfires. However, most existing research on wildfires focuses primarily on wildfire identification and prediction, with limited attention given to the intelligent interpretation of detailed information, such as the fire front within a fire region. To address this gap, advance the analysis of fire fronts in UAV-captured visible images, and facilitate future calculations of fire behavior parameters, a new method is proposed for the intelligent segmentation of wildfire regions and the interpretation of their fire fronts. The proposed method comprises three key steps: deep learning-based fire segmentation, boundary tracking of wildfire regions, and fire front interpretation. Specifically, the YOLOv7-tiny model is enhanced with a Convolutional Block Attention Module (CBAM), which integrates channel and spatial attention mechanisms to improve the model's focus on wildfire regions and boost segmentation precision. Experimental results show that the proposed method improved detection and segmentation precision by 3.8% and 3.6%, respectively, compared to existing approaches, and achieved an average segmentation frame rate of 64.72 Hz, well above the 30 Hz threshold required for real-time fire segmentation. Furthermore, the method's effectiveness in boundary tracking and fire front interpretation was validated using real fire image data from an outdoor grassland fire fusion experiment. Additional tests on data from southern New South Wales, Australia, confirmed the robustness of the method in accurately interpreting the fire front. The findings of this research have potential applications in dynamic data-driven forest fire spread modeling and fire digital twinning. The code and dataset are publicly available at https://github.com/makemoneyokk/fire-segmentation-interpretation.git.
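The second step of the pipeline, boundary tracking of the segmented wildfire region, can be illustrated with a minimal sketch. This is not the paper's actual tracking algorithm; it only shows the underlying idea of isolating a fire-front contour from a binary segmentation mask, where a pixel is treated as a boundary pixel if it is fire and touches background (or the image edge) through any 4-neighbour:

```python
# Illustrative sketch (not the paper's algorithm): extract the boundary
# pixels of a binary fire mask produced by a segmentation model.
def fire_boundary(mask):
    """Return the set of (row, col) boundary pixels of a binary fire mask.

    A fire pixel (value 1) is on the boundary if at least one of its
    4-neighbours is background (value 0) or lies outside the image.
    """
    rows, cols = len(mask), len(mask[0])
    boundary = set()
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] != 1:
                continue
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if not (0 <= nr < rows and 0 <= nc < cols) or mask[nr][nc] == 0:
                    boundary.add((r, c))
                    break
    return boundary

# Tiny 5x5 example: a 3x3 fire blob whose centre pixel is interior.
mask = [
    [0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0],
]
print(len(fire_boundary(mask)))  # 8 boundary pixels ring one interior pixel
```

In practice such a contour would then be ordered into a closed curve (e.g. by contour tracing) before the fire front segment is interpreted from it.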
Pages: 473-489
Page count: 17