PIAFusion: A progressive infrared and visible image fusion network based on illumination aware

Cited by: 523
Authors
Tang, Linfeng [1 ]
Yuan, Jiteng [1 ]
Zhang, Hao [1 ]
Jiang, Xingyu [1 ]
Ma, Jiayi [1 ]
Affiliations
[1] Wuhan Univ, Elect Informat Sch, Wuhan 430072, Peoples R China
Keywords
Image fusion; Illumination aware; Cross-modality differential aware fusion; Deep learning; Performance; Gradient; Nest
DOI
10.1016/j.inffus.2022.03.007
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Infrared and visible image fusion aims to synthesize a single fused image containing salient targets and abundant texture details even under extreme illumination conditions. However, existing image fusion algorithms fail to take the illumination factor into account in the modeling process. In this paper, we propose a progressive image fusion network based on illumination awareness, termed PIAFusion, which adaptively maintains the intensity distribution of salient targets and preserves texture information in the background. Specifically, we design an illumination-aware sub-network to estimate the illumination distribution and calculate the illumination probability. Moreover, we utilize the illumination probability to construct an illumination-aware loss that guides the training of the fusion network. The cross-modality differential aware fusion module and halfway fusion strategy fully integrate common and complementary information under the constraint of the illumination-aware loss. In addition, a new benchmark dataset for infrared and visible image fusion, i.e., Multi-Spectral Road Scenarios (available at https://github.com/Linfeng-Tang/MSRS), is released to support network training and comprehensive evaluation. Extensive experiments demonstrate the superiority of our method over state-of-the-art alternatives in terms of target maintenance and texture preservation. In particular, our progressive fusion framework can integrate meaningful information from the source images around the clock according to illumination conditions. Furthermore, an application to semantic segmentation demonstrates the potential of PIAFusion for high-level vision tasks. Our code is available at https://github.com/Linfeng-Tang/PIAFusion.
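The abstract describes weighting the fusion objective by an illumination probability, so that the visible image dominates under daylight and the infrared image dominates at night. A minimal sketch of such an illumination-aware intensity loss, assuming the sub-network outputs a scalar daytime probability `p_day` and that the two modalities are balanced by L1 intensity terms (the paper's exact formulation may differ):

```python
import numpy as np

def illumination_aware_loss(fused, ir, vis, p_day):
    """Hedged sketch of an illumination-aware intensity loss.

    p_day is the probability (estimated by an illumination sub-network,
    hypothetical here) that the scene is daytime. Under daylight the fused
    image is pulled toward the visible image; at night, toward the infrared
    image, so salient thermal targets are preserved.
    """
    l_vis = np.abs(fused - vis).mean()   # L1 distance to the visible image
    l_ir = np.abs(fused - ir).mean()     # L1 distance to the infrared image
    return p_day * l_vis + (1.0 - p_day) * l_ir

# Toy 4x4 single-channel "images": the fused result sits near the visible
# intensity, so a high daytime probability yields a small loss.
ir = np.full((4, 4), 0.8)
vis = np.full((4, 4), 0.2)
fused = np.full((4, 4), 0.3)
print(round(illumination_aware_loss(fused, ir, vis, p_day=0.9), 3))  # → 0.14
```

With `p_day = 0.9` the visible term (distance 0.1) is weighted heavily and the infrared term (distance 0.5) lightly, giving 0.9·0.1 + 0.1·0.5 = 0.14; swapping day for night reverses which modality constrains the result.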
Pages: 79-92
Page count: 14