Infrared and visible image fusion based on a two-stage fusion strategy and feature interaction block

Cited by: 0
Authors
Chen, Bingxin [1 ,2 ]
Luo, Shaojuan [3 ]
Chen, Meiyun [1 ]
Zhang, Fanlong [1 ,2 ]
He, Chunhua [1 ,2 ]
Wu, Heng [1 ,2 ]
Affiliations
[1] Guangdong Univ Technol, Sch Automat, Guangdong Prov Key Lab Cyber Phys Syst, Guangzhou 510006, Peoples R China
[2] Guangdong Univ Technol, Sch Comp, Guangzhou 510006, Peoples R China
[3] Guangdong Univ Technol, Sch Chem Engn & Light Ind, Guangzhou 510006, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Image fusion; Neural network; Optical imaging; Image processing;
DOI
10.1016/j.optlaseng.2024.108461
Chinese Library Classification (CLC)
O43 [Optics];
Discipline Code
070207 ; 0803 ;
Abstract
Infrared and visible image fusion technology (IVIFT) combines the advantages of infrared and visible imaging systems and reduces the influence of adverse environments such as snow, darkness, and fog. IVIFT is therefore widely applied in security inspection, night monitoring, and remote sensing. However, many existing methods optimize the model in a single stage, which often causes weak robustness and an imbalance between intensity and detail information. To solve this issue, we propose an infrared and visible image fusion method based on a two-stage fusion strategy and a feature interaction block (TFfusion). Specifically, a two-stage fusion strategy is developed to balance the retention of salient targets and texture information: texture information is fused in Stage I, salient targets are fused in Stage II, and Stage I guides Stage II in extracting texture information. A feature interaction block is designed to enhance the correlation between the source images and the fused image by sharing features between them. Quantitative and qualitative experimental results demonstrate that TFfusion achieves competitive performance and strong robustness in fusing infrared and visible images compared with other advanced fusion methods.
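The two-stage idea in the abstract (Stage I fuses texture, Stage II injects salient targets under Stage-I guidance) can be illustrated at the pixel level. The sketch below is a rough conceptual analogue in NumPy, not the authors' learned network: the gradient-magnitude texture proxy, the intensity-threshold saliency mask, and the `alpha` blending weight are all illustrative assumptions.

```python
import numpy as np

def gradient_magnitude(img):
    # Simple texture proxy: L1 finite-difference gradient magnitude.
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    return gx + gy

def stage1_texture_fusion(ir, vis):
    # Stage I (texture): per pixel, keep the source with richer texture,
    # measured here by the larger local gradient magnitude.
    mask = gradient_magnitude(vis) >= gradient_magnitude(ir)
    return np.where(mask, vis, ir)

def stage2_salient_fusion(ir, vis, texture_guide, alpha=0.5):
    # Stage II (salient targets), guided by the Stage-I texture result:
    # blend the texture guide with a max-intensity fusion, then force
    # salient infrared pixels (crude threshold saliency) to keep the
    # infrared target intensity.
    salient = ir > ir.mean() + ir.std()
    fused = alpha * texture_guide + (1.0 - alpha) * np.maximum(ir, vis)
    fused[salient] = np.maximum(fused, ir)[salient]
    return fused

# Usage on toy 2x2 "images" normalized to [0, 1]:
ir = np.array([[0.1, 0.9], [0.2, 0.8]])
vis = np.array([[0.5, 0.4], [0.6, 0.3]])
texture = stage1_texture_fusion(ir, vis)
result = stage2_salient_fusion(ir, vis, texture)
```

The hand-crafted masks stand in for what TFfusion learns end-to-end; the point is only the data flow, with Stage I's output feeding Stage II as guidance.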
Pages: 24
Related Papers
50 records in total
  • [31] Multiscale feature learning and attention mechanism for infrared and visible image fusion
    Li Gao
    DeLin Luo
    Song Wang
    Science China Technological Sciences, 2024, 67 : 408 - 422
  • [32] HFHFusion: A Heterogeneous Feature Highlighted method for infrared and visible image fusion
    Zheng, Yulong
    Zhao, Yan
    Chen, Jian
    Chen, Mo
    Yu, Jiaqi
    Wei, Jian
    Wang, Shigang
    OPTICS COMMUNICATIONS, 2024, 571
  • [33] Feature dynamic alignment and refinement for infrared-visible image fusion: Translation robust fusion
    Li, Huafeng
    Zhao, Junzhi
    Li, Jinxing
    Yu, Zhengtao
    Lu, Guangming
    INFORMATION FUSION, 2023, 95 : 26 - 41
  • [34] Unsupervised Infrared Image and Visible Image Fusion Algorithm Based on Deep Learning
    Chen Guoyang
    Wu Xiaojun
    Xu Tianyang
    LASER & OPTOELECTRONICS PROGRESS, 2022, 59 (04)
  • [35] Infrared and Visible Image Fusion Based on Tetrolet Transform
    Zhou, Xin
    Wang, Wei
    PROCEEDINGS OF THE 2015 INTERNATIONAL CONFERENCE ON COMMUNICATIONS, SIGNAL PROCESSING, AND SYSTEMS, 2016, 386 : 701 - 708
  • [36] Attribute filter based infrared and visible image fusion
    Mo, Yan
    Kang, Xudong
    Duan, Puhong
    Sun, Bin
    Li, Shutao
    INFORMATION FUSION, 2021, 75 : 41 - 54
  • [37] An Information Retention and Feature Transmission Network for Infrared and Visible Image Fusion
    Liu, Chang
    Yang, Bin
    Li, Yuehua
    Zhang, Xiaozhi
    Pang, Lihui
    IEEE SENSORS JOURNAL, 2021, 21 (13) : 14950 - 14959
  • [38] Infrared and Visible Image Fusion Based on Semantic Segmentation
    Zhou H.
    Hou J.
    Wu W.
    Zhang Y.
    Wu Y.
    Ma J.
    Jisuanji Yanjiu yu Fazhan/Computer Research and Development, 2021, 58 (02): : 436 - 443
  • [39] Infrared and Visible Image Fusion Based on Mask and Cross-Dynamic Fusion
    Fu, Qiang
    Fu, Hanxiang
    Wu, Yuezhou
    ELECTRONICS, 2023, 12 (20)
  • [40] Visible and Infrared Image Fusion Based on Curvelet Transform
    Quan, Siji
    Qian, Weiping
    Guo, Junhai
    Zhao, Hua
    2014 2ND INTERNATIONAL CONFERENCE ON SYSTEMS AND INFORMATICS (ICSAI), 2014, : 828 - 832