Frequency Integration and Spatial Compensation Network for infrared and visible image fusion

Cited: 20
Authors
Zheng, Naishan [1 ]
Zhou, Man [1 ]
Huang, Jie [1 ]
Zhao, Feng [1 ]
Affiliations
[1] Univ Sci & Technol China, Sch Informat Sci & Technol, Hefei 230027, Peoples R China
Keywords
Image fusion; Infrared and visible image; Frequency integration; Spatial compensation; PERFORMANCE; ARCHITECTURE; TRANSFORM; MODEL; NEST;
DOI
10.1016/j.inffus.2024.102359
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Infrared and visible image fusion aims to synthesize a fused image that emphasizes the salient objects while retaining the intricate texture and visual quality of both infrared and visible images. In contrast to the majority of existing deep learning-based fusion approaches, which predominantly focus on spatial information and neglect the valuable frequency information, we propose a novel method that delves into both domains simultaneously to tackle the infrared and visible image fusion task. Specifically, we first analyze the frequency characteristics of the two modality images via the Fourier transform, and observe that fusion results with complementary attributes from the source images can be effectively attained by directly incorporating their phase components. To this end, we propose a Frequency Integration and Spatial Compensation Network (FISCNet), consisting of two core designs: a frequency integration component and a spatial compensation component. The former integrates prominent objects from the source images while maintaining the visual perception of the visible image in the frequency domain, and the latter improves the detailed texture and emphasizes the salient objects through a meticulous compensation mechanism in the spatial domain. Extensive experiments on various benchmarks demonstrate the superiority of our method over state-of-the-art alternatives in terms of both salience preservation and texture fidelity. Code is available at https://github.com/zheng980629/FISCNet.
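The frequency-domain observation above (that combining the phase components of the two modalities yields a fusion with complementary attributes) can be illustrated with a minimal NumPy sketch. The function name, the amplitude-blending weight, and the phase-averaging scheme here are assumptions for illustration only; they are not the actual FISCNet design, which additionally uses a learned spatial compensation component.

```python
import numpy as np

def naive_frequency_fusion(ir, vis, amp_weight=0.5):
    """Toy frequency-domain fusion sketch (NOT the paper's FISCNet method).

    Blends the amplitude spectra of the infrared and visible images and
    averages their phase spectra, then inverts the FFT to get a fused image.
    """
    F_ir = np.fft.fft2(ir)
    F_vis = np.fft.fft2(vis)
    # Blend amplitude spectra with a simple convex weight (an assumption).
    amp = amp_weight * np.abs(F_ir) + (1.0 - amp_weight) * np.abs(F_vis)
    # Average phases via unit phasors to avoid 2*pi wrap-around issues.
    phase = np.angle(np.exp(1j * np.angle(F_ir)) + np.exp(1j * np.angle(F_vis)))
    # Recombine amplitude and phase, invert, and keep the real part.
    fused = np.real(np.fft.ifft2(amp * np.exp(1j * phase)))
    return np.clip(fused, 0.0, 1.0)

# Usage: two toy grayscale images with intensities in [0, 1].
rng = np.random.default_rng(0)
ir = rng.random((64, 64))
vis = rng.random((64, 64))
fused = naive_frequency_fusion(ir, vis)
```

This sketch only mirrors the motivating analysis (phase carries the complementary structure of the sources); the spatial compensation mechanism described in the abstract has no counterpart here.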
Pages: 13