Rethinking the necessity of image fusion in high-level vision tasks: A practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity

Cited: 104
Authors
Tang, Linfeng [1 ]
Zhang, Hao [1 ]
Xu, Han [1 ]
Ma, Jiayi [1 ]
Affiliations
[1] Wuhan University, Electronic Information School, Wuhan 430072, China
Keywords
Image fusion; High-level vision task; Progressive semantic injection; Scene fidelity; Feature-level fusion; Multiscale transform; Framework
DOI
10.1016/j.inffus.2023.101870
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image fusion aims to integrate the complementary characteristics of source images into a single fused image that better serves both human visual observation and machine vision perception. However, most existing image fusion algorithms focus primarily on improving the visual appeal of fused images. Although some semantic-driven methods consider the semantic requirements of downstream applications, none of them has demonstrated the potential of image-level fusion relative to feature-level fusion, which performs high-level vision tasks directly on multi-modal features rather than on a fused image. To overcome these limitations, this paper presents a practical infrared and visible image fusion network based on progressive semantic injection and scene fidelity constraints, termed PSFusion. First, a sparse semantic perception branch extracts rich semantic features, which are progressively integrated into the fusion network through the semantic injection module to satisfy the semantic requirements of high-level vision tasks. The scene fidelity path within the scene restoration branch is devised to ensure that the fusion features contain complete information for reconstructing the source images. Additionally, a contrast mask and a salient target mask are employed to construct the fusion loss, preserving the visual appeal of the fusion results. In particular, we provide quantitative and qualitative analyses to demonstrate the potential of image-level fusion compared to feature-level fusion for high-level vision tasks. With the rapid advancement of large-scale models, image-level fusion can readily leverage the advantages of multi-modal data and state-of-the-art (SOTA) unimodal segmentation to achieve superior performance. Furthermore, extensive comparative experiments demonstrate the superiority of our PSFusion over SOTA image-level fusion alternatives in terms of visual appeal and high-level semantics. Even under harsh circumstances, our method offers satisfactory fusion results to serve subsequent high-level vision applications. The source code is available at https://github.com/Linfeng-Tang/PSFusion.
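To make the mask-guided fusion loss mentioned in the abstract concrete, the sketch below shows one common way such a loss is formed: a salient target mask steers the fused image toward infrared intensities in target regions and toward visible intensities elsewhere. This is a minimal NumPy illustration under that assumption; the function name and signature are hypothetical, and the paper's actual loss additionally involves a contrast mask and other terms not reproduced here.

```python
import numpy as np

def mask_guided_fusion_loss(fused: np.ndarray,
                            ir: np.ndarray,
                            vis: np.ndarray,
                            saliency_mask: np.ndarray) -> float:
    """Illustrative mask-weighted intensity loss (hypothetical form).

    Where the salient target mask is 1, the fused image is pulled toward
    the infrared intensities; where it is 0, toward the visible image.
    All inputs are arrays of the same shape with values in [0, 1].
    """
    # Blend the two sources into a per-pixel intensity target.
    target = saliency_mask * ir + (1.0 - saliency_mask) * vis
    # Mean absolute deviation of the fused image from that target.
    return float(np.mean(np.abs(fused - target)))
```

If the fused image already equals the mask-blended target, the loss is zero; otherwise it grows with the average per-pixel deviation, which is the behavior a fusion network would be trained to minimize.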
Pages: 16