Infrared and Visible Image Fusion via Decoupling Network

Cited by: 9
Authors
Wang, Xue [1 ]
Guan, Zheng [1 ]
Yu, Shishuang [1 ]
Cao, Jinde [2 ]
Li, Ya [1 ]
Affiliations
[1] Yunnan Univ, Sch Informat Sci & Engn, Kunming 650500, Peoples R China
[2] Southeast Univ, Sch Math, Nanjing 210096, Peoples R China
Funding
National Natural Science Foundation of China
Keywords
Force; Lead; Loss measurement; Hybrid power systems; Image fusion; Information exchange; Optimization; Decoupling network; hybrid loss; image fusion; information exchange; saliency map; INFORMATION; ATTENTION; FRAMEWORK; MODEL; NEST;
DOI
10.1109/TIM.2022.3216413
CLC classification
TM [Electrical Engineering]; TN [Electronics & Communication Technology];
Subject classification
0808; 0809;
Abstract
In general, the goal of existing infrared and visible image fusion (IVIF) methods is to make the fused image contain both the high-contrast regions of the infrared image and the texture details of the visible image. However, this objective causes the fused image to lose visible-image information in high-contrast areas. To address this problem, this article proposes a decoupling network-based IVIF method (DNFusion), which uses decoupled maps to impose additional constraints on the network, forcing it to retain the saliency information of the source images effectively. The conventional objective of image fusion is satisfied while the salient content of the source images is effectively preserved. Specifically, an internal feature interaction module (FIM) facilitates information exchange within the encoder and improves the utilization of complementary information. In addition, a hybrid loss function composed of weight fidelity loss, gradient loss, and decoupling loss ensures that the generated fused image preserves the texture details and luminance information of the source images. Qualitative and quantitative comparisons in extensive experiments demonstrate that our model generates fused images containing the salient objects and clear details of the source images, and that the proposed method outperforms other state-of-the-art (SOTA) methods.
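The hybrid loss the abstract describes combines three terms: a saliency-weighted fidelity term, a gradient term, and a decoupling term. A rough sketch of such a composition is below; the paper does not spell out its exact formulation here, so the specific term definitions, the max-gradient target, the saliency-map weighting, and the weights `w_grad` and `w_dec` are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def grads(img):
    """Horizontal and vertical finite-difference gradients of a 2-D image."""
    dx = img[:, 1:] - img[:, :-1]
    dy = img[1:, :] - img[:-1, :]
    return dx, dy

def hybrid_loss(fused, ir, vis, sal_ir, sal_vis, w_grad=1.0, w_dec=1.0):
    """Illustrative hybrid loss (assumed form, not the paper's).

    sal_ir / sal_vis play the role of decoupled saliency maps in [0, 1]
    that weight each source's contribution; w_grad and w_dec are
    placeholder trade-off weights.
    """
    # Saliency-weighted fidelity: match IR intensity where IR is salient,
    # visible intensity where the visible image is salient.
    fidelity = np.mean(sal_ir * (fused - ir) ** 2
                       + sal_vis * (fused - vis) ** 2)

    # Gradient term: push the fused gradient magnitude toward the
    # stronger of the two source gradients at each pixel.
    fdx, fdy = grads(fused)
    idx, idy = grads(ir)
    vdx, vdy = grads(vis)
    tdx = np.maximum(np.abs(idx), np.abs(vdx))
    tdy = np.maximum(np.abs(idy), np.abs(vdy))
    grad = (np.mean((np.abs(fdx) - tdx) ** 2)
            + np.mean((np.abs(fdy) - tdy) ** 2))

    # Decoupling term: within each saliency-masked region, the fused
    # image should stay close to the corresponding source image.
    dec = (np.mean(np.abs(sal_ir * (fused - ir)))
           + np.mean(np.abs(sal_vis * (fused - vis))))

    return fidelity + w_grad * grad + w_dec * dec
```

With this structure, a fused image identical to both sources incurs zero loss, while deviations in salient regions are penalized through both the fidelity and decoupling terms.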
Pages: 13
Related papers
50 records in total
  • [1] BDPartNet: Feature Decoupling and Reconstruction Fusion Network for Infrared and Visible Image
    Wang, Xuejie
    Zhang, Jianxun
    Tao, Ye
    Yuan, Xiaoli
    Guo, Yifan
    CMC-COMPUTERS MATERIALS & CONTINUA, 2024, 79 (03): : 4621 - 4639
  • [2] Infrared and Visible Image Fusion via Multiscale Receptive Field Amplification Fusion Network
    Ji, Chuanming
    Zhou, Wujie
    Lei, Jingsheng
    Ye, Lv
    IEEE SIGNAL PROCESSING LETTERS, 2023, 30 : 493 - 497
  • [3] Infrared and Visible Image Fusion via Texture Conditional Generative Adversarial Network
    Yang, Yong
    Liu, Jiaxiang
    Huang, Shuying
    Wan, Weiguo
    Wen, Wenying
    Guan, Juwei
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS FOR VIDEO TECHNOLOGY, 2021, 31 (12) : 4771 - 4783
  • [4] MSFNet: MultiStage Fusion Network for infrared and visible image fusion
    Wang, Chenwu
    Wu, Junsheng
    Zhu, Zhixiang
    Chen, Hao
    NEUROCOMPUTING, 2022, 507 : 26 - 39
  • [5] MCnet: Multiscale visible image and infrared image fusion network
    Sun, Le
    Li, Yuhang
    Zheng, Min
    Zhong, Zhaoyi
    Zhang, Yanchun
    SIGNAL PROCESSING, 2023, 208
  • [6] An Efficient Network Model for Visible and Infrared Image Fusion
    Pan, Zhu
    Ouyang, Wanqi
    IEEE ACCESS, 2023, 11 : 86413 - 86430
  • [7] Multigrained Attention Network for Infrared and Visible Image Fusion
    Li, Jing
    Huo, Hongtao
    Li, Chang
    Wang, Renhua
    Sui, Chenhong
    Liu, Zhao
    IEEE TRANSACTIONS ON INSTRUMENTATION AND MEASUREMENT, 2021, 70
  • [8] CFNet: An infrared and visible image compression fusion network
    Xing, Mengliang
    Liu, Gang
    Tang, Haojie
    Qian, Yao
    Zhang, Jun
    PATTERN RECOGNITION, 2024, 156
  • [9] SimpliFusion: a simplified infrared and visible image fusion network
    Liu, Yong
    Li, Xingyuan
    Liu, Yong
    Zhong, Wei
    VISUAL COMPUTER, 2025, 41 (02): : 1335 - 1350
  • [10] Infrared and visible image fusion via gradientlet filter
    Ma, Jiayi
    Zhou, Yi
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2020, 197