DFA-Net: Multi-Scale Dense Feature-Aware Network via Integrated Attention for Unmanned Aerial Vehicle Infrared and Visible Image Fusion

Times Cited: 6
Authors
Shen, Sen [1 ]
Li, Di [2 ]
Mei, Liye [2 ]
Xu, Chuan [2 ]
Ye, Zhaoyi [2 ]
Zhang, Qi [2 ]
Hong, Bo [3 ]
Yang, Wei [3 ]
Wang, Ying [3 ]
Affiliations
[1] Naval Engn Univ, Sch Weap Engn, Wuhan 430032, Peoples R China
[2] Hubei Univ Technol, Sch Comp Sci, Wuhan 430068, Peoples R China
[3] Wuchang Shouyi Univ, Sch Informat Sci & Engn, Wuhan 430064, Peoples R China
Keywords
infrared and visible fusion; unmanned aerial vehicles; image fusion; multi-scale feature; unsupervised gradient estimation; SHEARLET TRANSFORM; GRADIENT TRANSFER; DECOMPOSITION; PERFORMANCE; FRAMEWORK; ENHANCEMENT
DOI
10.3390/drones7080517
CLC Number
TP7 [Remote Sensing Technology]
Discipline Classification Codes
081102; 0816; 081602; 083002; 1404
Abstract
Fusing infrared and visible images captured by an unmanned aerial vehicle (UAV) is a challenging task: infrared images distinguish targets from the background through differences in infrared radiation, but their low resolution makes those targets less pronounced, whereas visible images offer high spatial resolution and rich texture yet are easily degraded by harsh conditions such as low light. Fusing the two modalities therefore has the potential to combine their complementary advantages. In this paper, we propose a multi-scale dense feature-aware network via integrated attention for infrared and visible image fusion, namely DFA-Net. Firstly, we construct a dual-channel encoder to extract deep features from the infrared and visible images. Secondly, we adopt a nested decoder to adequately integrate features across the encoder scales, realizing a multi-scale representation of visible-image detail texture and infrared-image salient targets. Then, we present a feature-aware network via integrated attention to further fuse feature information at different scales, focusing on the specific advantageous features of the infrared and visible images. Finally, we use unsupervised gradient estimation and an intensity loss to learn significant fusion features of the infrared and visible images. Our proposed DFA-Net thus addresses the challenges of fusing infrared and visible images captured by a UAV. The results show that DFA-Net achieves excellent fusion performance on nine quantitative evaluation metrics under a low-light environment.
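The abstract states that DFA-Net is trained with unsupervised gradient estimation and an intensity loss, but this record does not spell out their formulation. The sketch below is a minimal, assumption-based illustration of the max-selection style of intensity and gradient losses commonly used in infrared-visible fusion work; the names `sobel_gradient` and `fusion_loss` and the weight `alpha` are illustrative and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def sobel_gradient(img):
    """Approximate image gradient magnitude with Sobel kernels (input shape N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=img.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)  # vertical Sobel kernel
    gx = F.conv2d(img, kx, padding=1)
    gy = F.conv2d(img, ky, padding=1)
    return gx.abs() + gy.abs()

def fusion_loss(fused, ir, vis, alpha=1.0):
    """Intensity term keeps the brighter (salient) pixels of either input;
    gradient term steers the fused image toward the stronger local texture.
    This is a common generic formulation, not necessarily DFA-Net's exact loss."""
    loss_intensity = F.l1_loss(fused, torch.max(ir, vis))
    grad_target = torch.max(sobel_gradient(ir), sobel_gradient(vis))
    loss_gradient = F.l1_loss(sobel_gradient(fused), grad_target)
    return loss_intensity + alpha * loss_gradient

# Example usage with random single-channel tensors (batch of 2, 128 x 128):
ir = torch.rand(2, 1, 128, 128)
vis = torch.rand(2, 1, 128, 128)
fused = torch.rand(2, 1, 128, 128, requires_grad=True)
print(fusion_loss(fused, ir, vis))
```

Under this formulation the objective is unsupervised in the sense that no ground-truth fused image is required: the intensity term pulls the fused result toward the brighter of the two inputs at each pixel (favouring salient infrared targets), while the gradient term pulls its Sobel gradients toward the stronger local texture, which is typically contributed by the visible image.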
Pages: 18
Related Papers
50 records in total
[41] HATF: Multi-Modal Feature Learning for Infrared and Visible Image Fusion via Hybrid Attention Transformer [J]. Liu, Xiangzeng; Wang, Ziyao; Gao, Haojie; Li, Xiang; Wang, Lei; Miao, Qiguang. REMOTE SENSING, 2024, 16(05).
[42] FLFuse-Net: A fast and lightweight infrared and visible image fusion network via feature flow and edge compensation for salient information [J]. Xue, Weimin; Wang, Anhong; Zhao, Lijun. INFRARED PHYSICS & TECHNOLOGY, 2022, 127.
[43] Modality specific infrared and visible image fusion based on multi-scale rich feature representation under low-light environment [J]. Liu, Chenhua; Chen, Hanrui; Deng, Lei; Guo, Chentong; Lu, Xitian; Yu, Heng; Zhu, Lianqing; Dong, Mingli. INFRARED PHYSICS & TECHNOLOGY, 2024, 140.
[44] SMFD: an end-to-end infrared and visible image fusion model based on shared-individual multi-scale feature decomposition [J]. Xu, Mingrui; Kong, Jun; Jiang, Min; Liu, Tianshan. JOURNAL OF APPLIED REMOTE SENSING, 2024, 18(02): 22203.
[45] Infrared and visible image fusion method based on principal component analysis network and multi-scale morphological gradient [J]. Li, Shengshi; Zou, Yonghua; Wang, Guanjun; Lin, Cong. INFRARED PHYSICS & TECHNOLOGY, 2023, 133.
[46] WaveFusionNet: Infrared and visible image fusion based on multi-scale feature encoder-decoder and discrete wavelet decomposition [J]. Liu, Renhe; Liu, Yu; Wang, Han; Du, Shan. OPTICS COMMUNICATIONS, 2024, 573.
[47] Infrared and visible image fusion via saliency analysis and local edge-preserving multi-scale decomposition [J]. Zhang, Xiaoye; Ma, Yong; Fan, Fan; Zhang, Ying; Huang, Jun. JOURNAL OF THE OPTICAL SOCIETY OF AMERICA A-OPTICS IMAGE SCIENCE AND VISION, 2017, 34(08): 1400-1410.
[48] Infrared Polarization Image Fusion via Multi-Scale Sparse Representation and Pulse Coupled Neural Network [J]. Zhang, Jiajia; Zhou, Huixin; Wei, Shun; Tan, Wei. AOPC 2019: OPTICAL SENSING AND IMAGING TECHNOLOGY, 2019, 11338.
[49] Single infrared image stripe removal via deep multi-scale dense connection convolutional neural network [J]. Xu, Kai; Zhao, Yaohong; Li, Fangzhou; Xiang, Wei. INFRARED PHYSICS & TECHNOLOGY, 2022, 121.
[50] GTMFuse: Group-attention transformer-driven multiscale dense feature-enhanced network for infrared and visible image fusion [J]. Mei, Liye; Hu, Xinglong; Ye, Zhaoyi; Tang, Linfeng; Wang, Ying; Li, Di; Liu, Yan; Hao, Xin; Lei, Cheng; Xu, Chuan; Yang, Wei. KNOWLEDGE-BASED SYSTEMS, 2024, 293.