Multimodal image fusion via coupled feature learning

Cited by: 44
Authors
Veshki, Farshad G. [1 ]
Ouzir, Nora [2 ]
Vorobyov, Sergiy A. [1 ]
Ollila, Esa [1 ]
Affiliations
[1] Aalto Univ, Dept Signal Proc & Acoust, Espoo, Finland
[2] Univ Paris-Saclay, CentraleSupelec, Inria, CVN, Paris, France
Keywords
Multimodal image fusion; Coupled dictionary learning; Joint sparse representation; Multimodal medical imaging; Infrared images; QUALITY ASSESSMENT; K-SVD; DICTIONARIES; ALGORITHMS;
DOI
10.1016/j.sigpro.2022.108637
Chinese Library Classification (CLC)
TM [Electrical Technology]; TN [Electronic Technology, Communication Technology];
Subject Classification Codes
0808; 0809;
Abstract
This paper presents a multimodal image fusion method using a novel decomposition model based on coupled dictionary learning. The proposed method is general and can be used for a variety of imaging modalities. In particular, the images to be fused are decomposed into correlated and uncorrelated components using, respectively, sparse representations with identical supports and a Pearson correlation constraint. The resulting optimization problem is solved by an alternating minimization algorithm. Contrary to other learning-based fusion methods, the proposed approach does not require any training data; the correlated features are extracted online from the data itself. By preserving the uncorrelated components in the fused images, the proposed fusion method significantly improves on current fusion approaches in terms of maintaining texture details and modality-specific information. The maximum-absolute-value rule is used for the fusion of the correlated components only. This leads to enhanced contrast resolution without causing intensity attenuation or loss of important information. Experimental results show that the proposed method achieves superior performance in terms of both visual and objective evaluations compared to state-of-the-art image fusion methods. (C) 2022 The Author(s). Published by Elsevier B.V.
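To make the decomposition and fusion steps concrete, the following is a minimal, illustrative Python sketch, not the authors' implementation. Patches from two co-registered images are sparsely coded over a pair of coupled dictionaries with an identical support (a simplified joint OMP), the correlated sparse codes are fused with the maximum-absolute-value rule, and the reconstruction residuals stand in for the uncorrelated components. The dictionaries here are random rather than learned, and the Pearson-correlation constraint and alternating dictionary updates of the paper are omitted; the names extract_patches, joint_omp and fuse are hypothetical and introduced only for this sketch.

# Illustrative sketch of coupled-feature fusion (simplified; see caveats above).
import numpy as np

def extract_patches(img, p=8, stride=4):
    # Collect overlapping p x p patches as columns of a matrix.
    H, W = img.shape
    patches, coords = [], []
    for i in range(0, H - p + 1, stride):
        for j in range(0, W - p + 1, stride):
            patches.append(img[i:i + p, j:j + p].ravel())
            coords.append((i, j))
    return np.array(patches).T, coords  # (p*p, n_patches)

def joint_omp(Dx, Dy, x, y, k):
    # Greedy sparse coding of x over Dx and y over Dy with a shared (identical) support.
    rx, ry = x.copy(), y.copy()
    support = []
    for _ in range(k):
        # Pick the atom index most correlated with both residuals.
        score = np.abs(Dx.T @ rx) + np.abs(Dy.T @ ry)
        score[support] = -np.inf
        support.append(int(np.argmax(score)))
        ax, *_ = np.linalg.lstsq(Dx[:, support], x, rcond=None)
        ay, *_ = np.linalg.lstsq(Dy[:, support], y, rcond=None)
        rx, ry = x - Dx[:, support] @ ax, y - Dy[:, support] @ ay
    cx = np.zeros(Dx.shape[1]); cx[support] = ax
    cy = np.zeros(Dy.shape[1]); cy[support] = ay
    return cx, cy

def fuse(img_x, img_y, n_atoms=64, k=4, p=8, stride=4):
    Px, coords = extract_patches(img_x, p, stride)
    Py, _ = extract_patches(img_y, p, stride)
    rng = np.random.default_rng(0)
    # Random unit-norm dictionaries stand in for the learned coupled dictionaries.
    Dx = rng.standard_normal((p * p, n_atoms)); Dx /= np.linalg.norm(Dx, axis=0)
    Dy = rng.standard_normal((p * p, n_atoms)); Dy /= np.linalg.norm(Dy, axis=0)
    fused = np.zeros_like(img_x, dtype=float)
    weight = np.zeros_like(fused)
    for idx, (i, j) in enumerate(coords):
        cx, cy = joint_omp(Dx, Dy, Px[:, idx], Py[:, idx], k)
        # Max-absolute-value rule on the correlated (sparse) codes.
        cf = np.where(np.abs(cx) >= np.abs(cy), cx, cy)
        # Uncorrelated parts are approximated by the reconstruction residuals
        # and simply averaged back in for this sketch.
        ux = Px[:, idx] - Dx @ cx
        uy = Py[:, idx] - Dy @ cy
        patch = 0.5 * (Dx + Dy) @ cf + 0.5 * (ux + uy)
        fused[i:i + p, j:j + p] += patch.reshape(p, p)
        weight[i:i + p, j:j + p] += 1
    return fused / np.maximum(weight, 1)  # average overlapping patch estimates

Calling fuse(ir, vis) on two equally sized grayscale float arrays returns a fused array of the same size; in the paper the dictionaries are learned by alternating minimization rather than drawn at random as above.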
Pages: 12
Related papers
Items [21]-[30] of 50
[21] Xiang, Shiming; Meng, Gaofeng; Wang, Ying; Pan, Chunhong; Zhang, Changshui. Image Deblurring with Coupled Dictionary Learning. INTERNATIONAL JOURNAL OF COMPUTER VISION, 2015, 114(2-3): 248-271.
[22] Zhang, Chengfang. Convolution analysis operator for multimodal image fusion. PROCEEDINGS OF THE 10TH INTERNATIONAL CONFERENCE OF INFORMATION AND COMMUNICATION TECHNOLOGY, 2021, 183: 603-608.
[23] Zhou, Runwu; Chang, Xiaojun; Shi, Lei; Shen, Yi-Dong; Yang, Yi; Nie, Feiping. Person Reidentification via Multi-Feature Fusion With Adaptive Graph Learning. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2020, 31(05): 1592-1601.
[24] Mahmud, Sakib; Bensaali, Faycal; Chowdhury, Muhammad E. H.; Houchati, Mahdi. Multimodal feature fusion and ensemble learning for non-intrusive occupancy monitoring using smart meters. BUILDING AND ENVIRONMENT, 2025, 271.
[25] Zhang, Jun; Jiao, Licheng; Ma, Wenping; Liu, Fang; Liu, Xu; Li, Lingling; Chen, Puhua; Yang, Shuyuan. Transformer Based Conditional GAN for Multimodal Image Fusion. IEEE TRANSACTIONS ON MULTIMEDIA, 2023, 25: 8988-9001.
[26] Tian, Chuangeng; Zhang, Juyuan; Tang, Lu. Perceptual objective evaluation for multimodal medical image fusion. FRONTIERS IN PHYSICS, 2025, 13.
[27] Yang, Guocheng; Chen, Leiting; Qiu, Hang. Detail-enhanced multimodal medical image fusion. 2014 IEEE 17TH INTERNATIONAL CONFERENCE ON COMPUTATIONAL SCIENCE AND ENGINEERING (CSE), 2014: 1611-1615.
[28] Zhang, Chengfang; Yi, Liangzhong; Feng, Ziliang; Gao, Zhisheng; Jin, Xin; Yan, Dan. Multimodal image fusion with adaptive joint sparsity model. JOURNAL OF ELECTRONIC IMAGING, 2019, 28(01).
[29] Zhang, Jie; Zhou, Pucheng; Xue, Mogen. Low-light image enhancement via coupled dictionary learning and extreme learning machine. 2018 INTERNATIONAL CONFERENCE ON IMAGE AND VIDEO PROCESSING, AND ARTIFICIAL INTELLIGENCE, 2018, 10836.
[30] Li, Jintao; Nie, Rencan; Cao, Jinde; Xie, Guangxu; Ding, Zhengze. LRFE-CL: A self-supervised fusion network for infrared and visible image via low redundancy feature extraction and contrastive learning. EXPERT SYSTEMS WITH APPLICATIONS, 2024, 251.