Multimodal image fusion via coupled feature learning

Cited: 45
Authors
Veshki, Farshad G. [1]
Ouzir, Nora [2]
Vorobyov, Sergiy A. [1]
Ollila, Esa [1]
Affiliations
[1] Aalto University, Department of Signal Processing and Acoustics, Espoo, Finland
[2] Université Paris-Saclay, Inria, CVN, CentraleSupélec, Paris, France
Keywords
Multimodal image fusion; Coupled dictionary learning; Joint sparse representation; Multimodal medical imaging; Infrared images; QUALITY ASSESSMENT; K-SVD; DICTIONARIES; ALGORITHMS;
DOI
10.1016/j.sigpro.2022.108637
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronics and Communication Technology]
Subject classification codes
0808; 0809
Abstract
This paper presents a multimodal image fusion method using a novel decomposition model based on coupled dictionary learning. The proposed method is general and can be used for a variety of imaging modalities. In particular, the images to be fused are decomposed into correlated and uncorrelated components using sparse representations with identical supports and a Pearson correlation constraint, respectively. The resulting optimization problem is solved by an alternating minimization algorithm. Contrary to other learning-based fusion methods, the proposed approach does not require any training data, and the correlated features are extracted online from the data itself. By preserving the uncorrelated components in the fused images, the proposed fusion method significantly improves on current fusion approaches in terms of maintaining texture details and modality-specific information. The maximum-absolute-value rule is used for the fusion of the correlated components only. This leads to enhanced contrast resolution without intensity attenuation or loss of important information. Experimental results show that the proposed method achieves superior performance in terms of both visual and objective evaluations compared to state-of-the-art image fusion methods. (C) 2022 The Author(s). Published by Elsevier B.V.
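To make the fusion step described in the abstract concrete, the sketch below illustrates the maximum-absolute-value rule applied to the correlated sparse codes while both uncorrelated components are carried over unchanged into the fused patches. It is a minimal illustration only, not the authors' implementation: the coupled dictionaries, sparse codes with identical supports, and uncorrelated residuals are random placeholders standing in for the outputs of the paper's alternating-minimization decomposition, and the names (max_abs_fuse, D_fused) as well as the averaging of the two coupled dictionaries are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: 8x8 patches vectorised to 64 dimensions, 128 dictionary atoms.
patch_dim, n_atoms, n_patches = 64, 128, 500

# Placeholders for the outputs of the coupled decomposition (learned online from
# the source images in the actual method, not sampled at random).
D_c1 = rng.standard_normal((patch_dim, n_atoms))            # coupled dictionary, modality 1
D_c2 = rng.standard_normal((patch_dim, n_atoms))            # coupled dictionary, modality 2
support = rng.random((n_atoms, n_patches)) < 0.05           # shared sparsity pattern
X1 = rng.standard_normal((n_atoms, n_patches)) * support    # sparse codes, modality 1
X2 = rng.standard_normal((n_atoms, n_patches)) * support    # identical supports, modality 2
U1 = 0.01 * rng.standard_normal((patch_dim, n_patches))     # uncorrelated component, modality 1
U2 = 0.01 * rng.standard_normal((patch_dim, n_patches))     # uncorrelated component, modality 2

def max_abs_fuse(a, b):
    """Coefficient-wise maximum-absolute-value selection for the correlated codes."""
    return np.where(np.abs(a) >= np.abs(b), a, b)

# Fuse only the correlated part; preserve both uncorrelated components so that
# texture details and modality-specific information are kept in the fused patches.
X_fused = max_abs_fuse(X1, X2)
D_fused = 0.5 * (D_c1 + D_c2)        # assumption: simple average of the coupled dictionaries
patches_fused = D_fused @ X_fused + U1 + U2

print(patches_fused.shape)           # (64, 500): fused patches, to be reassembled into an image

In the actual method the dictionaries, codes, and uncorrelated components are obtained jointly from the source images themselves (no training data), and the fused patches are reassembled into the fused image.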
Pages: 12
Related papers (50 in total)
[41] Xiang, Shiming; Meng, Gaofeng; Wang, Ying; Pan, Chunhong; Zhang, Changshui. Image Deblurring with Coupled Dictionary Learning [J]. International Journal of Computer Vision, 2015, 114: 248-271.
[42] Fu, Jun; Li, Weisheng; Du, Jiao; Xiao, Bin. Multimodal medical image fusion via Laplacian pyramid and convolutional neural network reconstruction with local gradient energy strategy [J]. Computers in Biology and Medicine, 2020, 126.
[43] Kaplan, Lance M.; Burks, Stephen D.; Blum, Rick S.; Moore, Richard K.; Nguyen, Quang. Analysis of Image Quality for Image Fusion via Monotonic Correlation [J]. IEEE Journal of Selected Topics in Signal Processing, 2009, 3(2): 222-235.
[44] Feldman, Dan; Feigin, Micha; Sochen, Nir. Learning Big (Image) Data via Coresets for Dictionaries [J]. Journal of Mathematical Imaging and Vision, 2013, 46(3): 276-291.
[45] Dong, Ganggang; Wang, Yao; Liu, Hongwei; Liu, Songlin. Complex-Valued SAR Image Super-Resolution via Subaperture Learning and Fusion [J]. IEEE Transactions on Geoscience and Remote Sensing, 2025, 63.
[46] Xu, Han; Liang, Haochen; Ma, Jiayi. Unsupervised Multi-Exposure Image Fusion Breaking Exposure Limits via Contrastive Learning [J]. Thirty-Seventh AAAI Conference on Artificial Intelligence, 2023, 37(3): 3010-3017.
[47] Shahdoosti, Hamid Reza; Mehrabi, Adel. Multimodal image fusion using sparse representation classification in tetrolet domain [J]. Digital Signal Processing, 2018, 79: 9-22.
[48] Xie, Xinyu; Zhang, Xiaozhi; Tang, Xinglong; Zhao, Jiaxi; Xiong, Dongping; Ouyang, Lijun; Yang, Bin; Zhou, Hong; Ling, Bingo Wing-Kuen; Teo, Kok Lay. MACTFusion: Lightweight Cross Transformer for Adaptive Multimodal Medical Image Fusion [J]. IEEE Journal of Biomedical and Health Informatics, 2025, 29(5): 3317-3328.
[49] Huang, Jingxue; Tan, Tianshu; Li, Xiaosong; Ye, Tao; Wu, Yanxiong. Multiple attention channels aggregated network for multimodal medical image fusion [J]. Medical Physics, 2025, 52(4): 2356-2374.
[50] Panchal, R.; Tiwari, S.; Agarwal, S. Multimodal image fusion on ECG signals for congestive heart failure classification [J]. Multimedia Tools and Applications, 2025, 84(10): 8247-8259.