Infrared and visible image fusion via parallel scene and texture learning

Cited by: 44
Authors
Xu, Meilong [1 ]
Tang, Linfeng [1 ]
Zhang, Hao [1 ]
Ma, Jiayi [1 ]
Affiliations
[1] Electronic Information School, Wuhan University, Wuhan 430072, China
Keywords
Image fusion; Infrared; Scene and texture learning; Recurrent neural network
DOI
10.1016/j.patcog.2022.108929
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Image fusion plays a pivotal role in numerous high-level computer vision tasks. Existing deep learning-based image fusion methods usually perform feature extraction in an implicit manner, which can prevent some characteristics of the source images, e.g., contrast and structural information, from being fully extracted and integrated into the fused image. In this work, we propose an infrared and visible image fusion method via parallel scene and texture learning. Our key objective is to deploy two branches of deep neural networks, namely the content branch and the detail branch, to synchronously extract different characteristics from the source images and then reconstruct the fused image. The content branch focuses primarily on coarse-grained information and is deployed to estimate the global content of the source images. The detail branch attends primarily to fine-grained information; in this branch we design an omnidirectional spatially variant recurrent neural network to model the internal structure of the source images more accurately and extract texture-related features in an explicit manner. Extensive experiments show that our approach achieves significant improvements over the state of the art in qualitative and quantitative evaluations while requiring comparatively less running time. We also demonstrate the superiority of our fused results on the object detection task. Our code is available at: https://github.com/Melon-Xu/PSTLFusion
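The architecture described above lends itself to a compact illustration. Below is a minimal, hypothetical PyTorch sketch of the two-branch idea: a convolutional content branch for coarse-grained scene content, a detail branch in which a simplified four-direction gated scan stands in for the paper's omnidirectional spatially variant recurrent network, and a reconstruction head that merges both. All class names, layer widths, and the recurrence itself are assumptions of this sketch, not the authors' implementation; the official code is at the GitHub link above.

```python
# Hypothetical sketch of a parallel scene/texture fusion network.
# Not the authors' code; see https://github.com/Melon-Xu/PSTLFusion for the original.
import torch
import torch.nn as nn


class ContentBranch(nn.Module):
    """Coarse-grained branch: estimates the global scene content."""
    def __init__(self, in_ch=2, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1),
        )

    def forward(self, x):
        return self.net(x)


class DirectionalRNN(nn.Module):
    """Toy stand-in for the omnidirectional spatially variant RNN: a linear
    recurrence swept over rows and columns in four directions, with per-pixel
    gates predicted from the input (the 'spatially variant' part)."""
    def __init__(self, ch=32):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.Sigmoid())

    def _scan(self, x, g, dim, reverse):
        seq, gates = x.unbind(dim), g.unbind(dim)
        idx = range(len(seq) - 1, -1, -1) if reverse else range(len(seq))
        h, out = torch.zeros_like(seq[0]), [None] * len(seq)
        for i in idx:
            # Gated recurrence: each pixel blends the carried state with its input.
            h = gates[i] * h + (1 - gates[i]) * seq[i]
            out[i] = h
        return torch.stack(out, dim)

    def forward(self, x):
        g = self.gate(x)
        # Average the four directional scans: top-down, bottom-up, left-right, right-left.
        return sum(self._scan(x, g, d, r) for d in (2, 3) for r in (False, True)) / 4


class PSTLFusionSketch(nn.Module):
    """Parallel content + detail branches followed by a reconstruction head."""
    def __init__(self, ch=32):
        super().__init__()
        self.content = ContentBranch(2, ch)
        self.detail_in = nn.Conv2d(2, ch, 3, padding=1)
        self.detail_rnn = DirectionalRNN(ch)
        self.recon = nn.Sequential(
            nn.Conv2d(2 * ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, ir, vis):
        x = torch.cat([ir, vis], dim=1)          # stack the two source images
        c = self.content(x)                      # coarse-grained scene content
        d = self.detail_rnn(self.detail_in(x))   # fine-grained texture features
        return self.recon(torch.cat([c, d], dim=1))


if __name__ == "__main__":
    ir = torch.rand(1, 1, 128, 128)   # single-channel infrared image
    vis = torch.rand(1, 1, 128, 128)  # visible image (luminance channel)
    print(PSTLFusionSketch()(ir, vis).shape)  # torch.Size([1, 1, 128, 128])
```

Running both branches on the same stacked input and fusing their features only at reconstruction mirrors the paper's claim that scene and texture characteristics are learned in parallel rather than entangled in a single implicit encoder.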
Pages: 14