ADF-Net: Attention-guided deep feature decomposition network for infrared and visible image fusion

Times Cited: 0
Authors
Shen, Sen [1 ]
Zhang, Taotao [1 ]
Dong, Haidi [1 ]
Yuan, ShengZhi [1 ]
Li, Min [1 ]
Xiao, RenKai [1 ]
Zhang, Xiaohui [1 ]
Affiliations
[1] Naval Engn Univ, Sch Weap Engn, Wuhan 430032, Peoples R China
Keywords
computer vision; image fusion; gradient transfer
DOI
10.1049/ipr2.13134
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Widely used infrared and visible image fusion algorithms, which aim to enhance information acquisition by exploiting the complementary features of the two modalities, still face challenges such as information loss and image blurring. In response, the authors propose an attention-guided dual-branch deep hierarchical fusion network (ADF-Net). First, an attention convolution module extracts the shallow features of the image. A dual-branch deep decomposition feature extractor is then introduced, wherein a transformer encoder block (TEB) employs long-range attention to process low-frequency global features, while a CNN encoder block (CEB) extracts high-frequency local information. Finally, a global fusion layer based on the TEB and a local fusion layer based on the CEB fuse these features, from which the fused image is reconstructed. Experiments demonstrate that ADF-Net, trained and tested with a two-stage training scheme and an appropriate loss function, performs well across multiple metrics. In summary, this study proposes an attention-guided dual-branch deep decomposition network for end-to-end infrared and visible image fusion, in which deep feature extraction and fusion are achieved by the attention convolution module, the dual-branch deep decomposition module, and the base- and detail-feature fusion strategies.
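The base/detail split described in the abstract (low-frequency global content in one branch, high-frequency local content in the other, fused with separate strategies) can be illustrated with a classical, non-learned stand-in. The sketch below uses a simple box blur for the low-frequency "base" layer and the residual as the "detail" layer; it is a minimal illustration of the generic two-branch fusion idea, not ADF-Net's learned TEB/CEB encoders, and all function names are hypothetical.

```python
import numpy as np

def decompose(img, k=7):
    """Split an image into a low-frequency 'base' layer (box blur)
    and a high-frequency 'detail' residual. A box filter stands in
    for the learned encoders of a dual-branch fusion network."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    base = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    detail = img - base  # base + detail reconstructs the input exactly
    return base, detail

def fuse(ir, vis, k=7):
    """Fuse an infrared and a visible image: average the base layers
    (global, low-frequency content) and keep the stronger detail
    response per pixel (local, high-frequency content)."""
    b_ir, d_ir = decompose(ir, k)
    b_vis, d_vis = decompose(vis, k)
    base = 0.5 * (b_ir + b_vis)
    detail = np.where(np.abs(d_ir) >= np.abs(d_vis), d_ir, d_vis)
    return base + detail

rng = np.random.default_rng(0)
ir = rng.random((16, 16))   # toy infrared image
vis = rng.random((16, 16))  # toy visible image
fused = fuse(ir, vis)
print(fused.shape)  # (16, 16)
```

The average-base / max-absolute-detail rule is a common hand-crafted baseline in decomposition-based fusion; learned networks such as ADF-Net replace both the decomposition and the fusion rules with trainable modules.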
Pages: 2774-2787
Page count: 14
Related Papers
41 in total
[1]   THFuse: An infrared and visible image fusion network using transformer and hybrid feature extractor [J].
Chen, Jun ;
Ding, Jianfeng ;
Yu, Yang ;
Gong, Wenping .
NEUROCOMPUTING, 2023, 527 :71-82
[2]  
Dinh L., 2016, CoRR
[3]  
Dosovitskiy A., 2020, AN IMAGE IS WORTH 16X16 WORDS
[4]   Infrared and Visible Image Fusion Based on Two-Scale Decomposition and Saliency Extraction [J].
Feng Xin ;
Fang Chao ;
Gong Hai-feng ;
Lou Xi-cheng ;
Peng Ye .
SPECTROSCOPY AND SPECTRAL ANALYSIS, 2023, 43 (02) :590-596
[5]   SEDRFuse: A Symmetric Encoder-Decoder With Residual Block Network for Infrared and Visible Image Fusion [J].
Jian, Lihua ;
Yang, Xiaomin ;
Liu, Zheng ;
Jeon, Gwanggil ;
Gao, Mingliang ;
Chisholm, David .
IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2021, 70
[6]   RFN-Nest: An end-to-end residual fusion network for infrared and visible images [J].
Li, Hui ;
Wu, Xiao-Jun ;
Kittler, Josef .
INFORMATION FUSION, 2021, 73 :72-86
[7]   MDLatLRR: A Novel Decomposition Method for Infrared and Visible Image Fusion [J].
Li, Hui ;
Wu, Xiao-Jun ;
Kittler, Josef .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 :4733-4746
[8]   DenseFuse: A Fusion Approach to Infrared and Visible Images [J].
Li, Hui ;
Wu, Xiao-Jun .
IEEE TRANSACTIONS ON IMAGE PROCESSING, 2019, 28 (05) :2614-2623
[9]   AttentionFGAN: Infrared and Visible Image Fusion Using Attention-Based Generative Adversarial Networks [J].
Li, Jing ;
Huo, Hongtao ;
Li, Chang ;
Wang, Renhua ;
Feng, Qi .
IEEE TRANSACTIONS ON MULTIMEDIA, 2021, 23 :1383-1396
[10]   Performance comparison of different multi-resolution transforms for image fusion [J].
Li, Shutao ;
Yang, Bin ;
Hu, Jianwen .
INFORMATION FUSION, 2011, 12 (02) :74-84