Infrared and visible image fusion based on residual dense network and gradient loss

Cited by: 18
Authors
Li, Jiawei [1 ]
Liu, Jinyuan [2 ]
Zhou, Shihua [1 ]
Zhang, Qiang [1 ,3 ]
Kasabov, Nikola K. [4 ,5 ]
Affiliations
[1] Dalian Univ, Sch Software Engn, Key Lab Adv Design & Intelligent Comp, Minist Educ, Dalian, Peoples R China
[2] Dalian Univ Technol, Sch Mech Engn, Dalian 116024, Peoples R China
[3] Dalian Univ Technol, Sch Comp Sci & Technol, Dalian 116024, Peoples R China
[4] Auckland Univ Technol, Knowledge Engn & Discovery Res Inst, Auckland 1010, New Zealand
[5] Ulster Univ, Intelligent Syst Res Ctr, Londonderry BT52 1SA, North Ireland
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Unsupervised learning; End-to-end model; Infrared image; Visible image; MULTI-FOCUS; TRANSFORM;
DOI
10.1016/j.infrared.2022.104486
Chinese Library Classification
TH7 [Instruments and Meters];
Subject Classification Codes
0804; 080401; 081102;
Abstract
Deep learning has made great progress in the field of image fusion. Compared with traditional methods, image fusion approaches based on deep learning require no cumbersome matrix operations. In this paper, an end-to-end model for infrared and visible image fusion is proposed. This unsupervised learning network architecture does not employ a fusion strategy. In the feature extraction stage, residual dense blocks are used to generate a fusion image that preserves the information of the source images to the greatest extent. In the feature reconstruction stage, shallow feature maps, residual dense information, and deep feature maps are merged to build the fused result. The gradient loss we propose for the network cooperates well with special weight blocks extracted from the input images to express texture details in the fused images more clearly. In the training phase, we select 20 source image pairs with obvious characteristics from the TNO dataset and expand them by random cropping to serve as the training dataset of the network. Subjective qualitative and objective quantitative results show that the proposed model has advantages over state-of-the-art methods in infrared and visible image fusion tasks. We also conduct ablation experiments on the RoadScene dataset to verify the effectiveness of the proposed network for infrared and visible image fusion.
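The exact formulation of the paper's gradient loss is not given in this record; a minimal NumPy sketch of one common formulation for this kind of loss, in which the fused image is penalised for falling short of the stronger per-pixel source gradient, might look like the following (the function names and the forward-difference gradient are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def spatial_gradient(img):
    # Forward-difference gradient magnitude of a 2-D image (illustrative
    # stand-in for the Sobel-style operators often used in fusion losses).
    gx = np.diff(img, axis=1, append=img[:, -1:])
    gy = np.diff(img, axis=0, append=img[-1:, :])
    return np.abs(gx) + np.abs(gy)

def gradient_loss(fused, ir, vis):
    # Hypothetical gradient loss: at each pixel, take the stronger of the
    # infrared and visible gradients as the target, and penalise the fused
    # image's deviation from it with an L1 mean.
    target = np.maximum(spatial_gradient(ir), spatial_gradient(vis))
    return np.mean(np.abs(spatial_gradient(fused) - target))
```

Under this sketch, a fused image that reproduces the textured source exactly incurs zero loss, while a flat fused image is penalised wherever either source carries texture.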
Pages: 11
Related Articles (50 records)
  • [31] Infrared and visible image fusion based on multi-scale dense attention connection network
    Chen Y.
    Zhang J.
    Wang Z.
    Guangxue Jingmi Gongcheng/Optics and Precision Engineering, 2022, 30 (18): : 2253 - 2266
  • [32] MAFusion: Multiscale Attention Network for Infrared and Visible Image Fusion
    Li, Xiaoling
    Chen, Houjin
    Li, Yanfeng
    Peng, Yahui
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2022, 71
  • [33] IDFusion: An Infrared and Visible Image Fusion Network for Illuminating Darkness
    Lv, Guohua
    Wang, Xiyan
    Wei, Zhonghe
    Cheng, Jinyong
    Ma, Guangxiao
    Bao, Hanju
    PROCEEDINGS OF THE 2024 27TH INTERNATIONAL CONFERENCE ON COMPUTER SUPPORTED COOPERATIVE WORK IN DESIGN, CSCWD 2024, 2024, : 3140 - 3145
  • [34] Infrared and Visible Image Fusion Techniques Based on Deep Learning: A Review
    Sun, Changqi
    Zhang, Cong
    Xiong, Naixue
    ELECTRONICS, 2020, 9 (12) : 1 - 24
  • [35] Infrared and Visible Image Fusion Based on Tetrolet Transform
    Zhou, Xin
    Wang, Wei
    PROCEEDINGS OF THE 2015 INTERNATIONAL CONFERENCE ON COMMUNICATIONS, SIGNAL PROCESSING, AND SYSTEMS, 2016, 386 : 701 - 708
  • [36] DDFNet-A: Attention-Based Dual-Branch Feature Decomposition Fusion Network for Infrared and Visible Image Fusion
    Wei, Qiancheng
    Liu, Ying
    Jiang, Xiaoping
    Zhang, Ben
    Su, Qiya
    Yu, Muyao
    REMOTE SENSING, 2024, 16 (10)
  • [37] RADFNet: An infrared and visible image fusion framework based on distributed network
    Feng, Siling
    Wu, Can
    Lin, Cong
    Huang, Mengxing
    FRONTIERS IN PLANT SCIENCE, 2023, 13
  • [38] Infrared and Visible Image Fusion Based on Semantic Segmentation
    Zhou H.
    Hou J.
    Wu W.
    Zhang Y.
    Wu Y.
    Ma J.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (02): : 436 - 443
  • [39] Visible and Infrared Image Fusion Based on Curvelet Transform
    Quan, Siji
    Qian, Weiping
    Guo, Junhai
    Zhao, Hua
    2014 2ND INTERNATIONAL CONFERENCE ON SYSTEMS AND INFORMATICS (ICSAI), 2014, : 828 - 832
  • [40] Infrared and Visible Image Fusion Based on Improved Dual Path Generation Adversarial Network
    Yang, Shen
    Tian, Lifan
    Liang, Jiaming
    Huang, Zefeng
    JOURNAL OF ELECTRONICS & INFORMATION TECHNOLOGY, 2023, 45 (08) : 3012 - 3021