Unsupervised end-to-end infrared and visible image fusion network using learnable fusion strategy

Cited: 0
Authors
Chen, Yili [1 ,2 ]
Wan, Minjie [1 ,2 ]
Xu, Yunkai [1 ,2 ]
Cao, Xiqing [3 ,4 ]
Zhang, Xiaojie [3 ,4 ]
Chen, Qian [1 ,2 ]
Gu, Guohua [1 ,2 ]
Affiliations
[1] Nanjing Univ Sci & Technol, Sch Elect & Opt Engn, Nanjing 210094, Peoples R China
[2] Nanjing Univ Sci & Technol, Jiangsu Key Lab Spectral Imaging & Intelligent Sen, Nanjing 210094, Peoples R China
[3] Shanghai Aerosp Control Technol Inst, Shanghai 201109, Peoples R China
[4] Infrared Detect Technol Res & Dev Ctr, Shanghai 201109, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
QUALITY ASSESSMENT; PERFORMANCE; FRAMEWORK; DEEP; NEST;
DOI
10.1364/JOSAA.473908
CLC Classification Number
O43 [Optics];
Discipline Classification Code
070207 ; 0803 ;
Abstract
Infrared and visible image fusion aims to reconstruct fused images with comprehensive visual information by merging the complementary features of source images captured by different imaging sensors. This technology has been widely used in civil and military fields, such as urban security monitoring, remote sensing measurement, and battlefield reconnaissance. However, existing methods still suffer from preset fusion strategies that cannot be adjusted to different fusion demands and from the loss of information during feature propagation, leading to poor generalization ability and limited fusion performance. Therefore, we propose an unsupervised end-to-end network with a learnable fusion strategy for infrared and visible image fusion in this paper. The presented network consists of three parts: the feature extraction module, the fusion strategy module, and the image reconstruction module. First, in order to preserve more information during feature propagation, dense connections and residual connections are applied to the feature extraction module and the image reconstruction module, respectively. Second, a new convolutional neural network is designed to adaptively learn the fusion strategy, which enhances the generalization ability of our algorithm. Third, due to the lack of ground truth in fusion tasks, a loss function consisting of saliency loss and detail loss is exploited to guide the training direction and balance the retention of different types of information. Finally, the experimental results verify that the proposed algorithm delivers competitive performance when compared with several state-of-the-art algorithms in terms of both subjective and objective evaluations. Our codes are available at https://github.com/MinjieWan/Unsupervised-end-to-end-infrared-and-visible-image-fusion-network-using-learnable-fusion-strategy. (c) 2022 Optica Publishing Group
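To make the abstract's two-term objective concrete, here is a minimal NumPy sketch of a combined saliency + detail loss. The particular saliency weighting (pixel-wise contrast against the image mean) and the max-gradient detail target are illustrative assumptions, not the paper's exact formulation; `lam`, `saliency_weights`, and `fusion_loss` are hypothetical names.

```python
import numpy as np

def gradient(img):
    """Forward-difference gradients, used as a simple detail/texture proxy."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return gx, gy

def saliency_weights(ir, vis, eps=1e-8):
    """Assumed saliency map: normalized contrast of each source against its mean."""
    s_ir = np.abs(ir - ir.mean())
    s_vis = np.abs(vis - vis.mean())
    w_ir = s_ir / (s_ir + s_vis + eps)
    return w_ir, 1.0 - w_ir

def fusion_loss(fused, ir, vis, lam=0.5):
    """Total loss = saliency loss + lam * detail loss (both mean absolute error).

    Saliency loss pulls the fused image toward a saliency-weighted blend of the
    sources; detail loss pulls its gradients toward the strongest source edges.
    """
    w_ir, w_vis = saliency_weights(ir, vis)
    target = w_ir * ir + w_vis * vis
    l_saliency = np.abs(fused - target).mean()

    gfx, gfy = gradient(fused)
    gtx, gty = gradient(np.maximum(ir, vis))  # keep the sharper source detail
    l_detail = (np.abs(gfx - gtx) + np.abs(gfy - gty)).mean()

    return l_saliency + lam * l_detail
```

In the paper's end-to-end setting this scalar would be computed on network outputs and backpropagated; the weight `lam` balances how much salient intensity versus texture detail is retained, mirroring the trade-off described above.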
Pages: 2257-2270
Page count: 14
Related Papers
56 records in total
  • [21] Microsoft COCO: Common Objects in Context
    Lin, Tsung-Yi
    Maire, Michael
    Belongie, Serge
    Hays, James
    Perona, Pietro
    Ramanan, Deva
    Dollar, Piotr
    Zitnick, C. Lawrence
    [J]. COMPUTER VISION - ECCV 2014, PT V, 2014, 8693 : 740 - 755
  • [22] A general framework for image fusion based on multi-scale transform and sparse representation
    Liu, Yu
    Liu, Shuping
    Wang, Zengfu
    [J]. INFORMATION FUSION, 2015, 24 : 147 - 164
  • [23] Multi-focus image fusion with dense SIFT
    Liu, Yu
    Liu, Shuping
    Wang, Zengfu
    [J]. INFORMATION FUSION, 2015, 23 : 139 - 155
  • [24] Thermal infrared and visible sequences fusion tracking based on a hybrid tracking framework with adaptive weighting scheme
    Luo, Chengwei
    Sun, Bin
    Yang, Ke
    Lu, Taoran
    Yeh, Wei-Chang
    [J]. INFRARED PHYSICS & TECHNOLOGY, 2019, 99 : 265 - 276
  • [25] DDcGAN: A Dual-Discriminator Conditional Generative Adversarial Network for Multi-Resolution Image Fusion
    Ma, Jiayi
    Xu, Han
    Jiang, Junjun
    Mei, Xiaoguang
    Zhang, Xiao-Ping
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2020, 29 : 4980 - 4995
  • [26] FusionGAN: A generative adversarial network for infrared and visible image fusion
    Ma, Jiayi
    Yu, Wei
    Liang, Pengwei
    Li, Chang
    Jiang, Junjun
    [J]. INFORMATION FUSION, 2019, 48 : 11 - 26
  • [27] Infrared and visible image fusion via gradient transfer and total variation minimization
    Ma, Jiayi
    Chen, Chen
    Li, Chang
    Huang, Jun
    [J]. INFORMATION FUSION, 2016, 31 : 100 - 109
  • [28] Perceptual Quality Assessment for Multi-Exposure Image Fusion
    Ma, Kede
    Zeng, Kai
    Wang, Zhou
    [J]. IEEE TRANSACTIONS ON IMAGE PROCESSING, 2015, 24 (11) : 3345 - 3356
  • [29] Multi-focus image fusion using dictionary-based sparse representation
    Nejati, Mansour
    Samavi, Shadrokh
    Shirani, Shahram
    [J]. INFORMATION FUSION, 2015, 25 : 72 - 84
  • [30] Remote sensing image fusion using the curvelet transform
    Nencini, Filippo
    Garzelli, Andrea
    Baronti, Stefano
    Alparone, Luciano
    [J]. INFORMATION FUSION, 2007, 8 (02) : 143 - 156