Multi-Spectral Fusion using Generative Adversarial Networks for UAV Detection of Wild Fires

Cited by: 4
Authors
Kacker, Tanmay [1 ]
Perrusquia, Adolfo [2 ]
Guo, Weisi [2 ]
Affiliations
[1] Cranfield Univ, Sch Aerosp Transport Mfg, Bedford, England
[2] Cranfield Univ, Ctr Auto Cyberphys Syst, Bedford, England
Source
2023 INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE IN INFORMATION AND COMMUNICATION, ICAIIC | 2023
Keywords
fire detection; deep learning; UAV; drone; GAN;
DOI
10.1109/ICAIIC57133.2023.10067042
CLC Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104; 0812; 0835; 1405;
Abstract
Wildfires are increasingly responsible for immense ecological damage. Unmanned aerial vehicles (UAVs) are being used for the monitoring and early detection of wildfires. Recently, significant research has been conducted on using Deep Learning (DL) vision models for fire and smoke segmentation. Such models predominantly use images from the visible spectrum, which are operationally prone to large false-positive rates and sub-optimal performance across environmental conditions. In comparison, fire detection using infrared (IR) images has been shown to be robust to lighting and environmental variations, but long-range IR sensors remain expensive. There is increasing interest in the fusion of visible and IR images, since a fused representation combines the visual and thermal information of the scene. This yields significant benefits, especially in reducing false positives and increasing model robustness. However, the impact of fusing the two spectra on fire segmentation performance has not been extensively investigated. In this paper, we assess multiple image fusion techniques and evaluate the performance of a U-Net based segmentation model on each of three image representations: visible, IR, and fused. We also identify subsets of fire classes that achieve better results with the fused representation.
Pages: 182-187
Number of pages: 6
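The fuse-then-segment-then-score pipeline described in the abstract can be illustrated with a minimal sketch. This is not the paper's method: the paper evaluates GAN-based fusion and a U-Net segmenter, whereas the sketch below substitutes a naive weighted-average pixel fusion and a simple threshold in place of the U-Net, and uses Intersection-over-Union (IoU) as the segmentation metric. All function names and values here are illustrative assumptions.

```python
import numpy as np

def fuse_weighted(visible, infrared, alpha=0.5):
    """Naive pixel-level fusion: convex combination of co-registered
    visible (grayscale) and IR frames. A stand-in for the learned
    (GAN-based) fusion the paper actually evaluates."""
    assert visible.shape == infrared.shape
    return alpha * visible + (1.0 - alpha) * infrared

def iou(pred_mask, true_mask):
    """Intersection-over-Union between boolean masks: the standard
    metric for comparing segmentation on visible / IR / fused inputs."""
    inter = np.logical_and(pred_mask, true_mask).sum()
    union = np.logical_or(pred_mask, true_mask).sum()
    return inter / union if union else 1.0

# Toy example: the IR channel highlights a hot region that the
# visible channel misses entirely.
vis = np.zeros((4, 4))
ir = np.zeros((4, 4))
ir[1:3, 1:3] = 1.0

fused = fuse_weighted(vis, ir, alpha=0.4)
pred = fused > 0.5   # thresholding as a stand-in for U-Net output
truth = ir > 0.5
print(round(iou(pred, truth), 2))
```

In this toy case the thermal information carried into the fused frame is what makes the fire region recoverable at all, which mirrors the abstract's argument that fusion reduces false outcomes where the visible spectrum alone is uninformative.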