Multimodal medical image fusion based on nonsubsampled shearlet transform and convolutional sparse representation

Cited by: 19
Authors
Wang, Lifang [1]
Dou, Jieliang [1]
Qin, Pinle [1]
Lin, Suzhen [1]
Gao, Yuan [1]
Wang, Ruifang [1]
Zhang, Jin [1]
Affiliations
[1] North Univ China, Sch Data Sci & Technol, Shanxi Key Lab Biomed Imaging & Imaging Big Data, Taiyuan, Shanxi, Peoples R China
Keywords
Multimodal medical image fusion; Convolutional sparse representation; Nonsubsampled shearlet transform; Regional energy; Improved spatial frequency
DOI
10.1007/s11042-021-11379-w
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Multimodal medical image fusion technology can assist doctors in diagnosing diseases accurately and efficiently. However, multi-scale decomposition based fusion methods suffer from low contrast and energy loss, while sparse representation based fusion methods suffer from weak expressive ability, caused by the use of a single dictionary, and from spatial inconsistency. To solve these problems, this paper proposes a novel multimodal medical image fusion method based on the nonsubsampled shearlet transform (NSST) and convolutional sparse representation (CSR). First, the registered source images are decomposed into multi-scale, multi-direction sub-images, and these sub-images are trained separately to obtain different sub-dictionaries using the alternating direction method of multipliers. Second, the sub-images at each scale are encoded by convolutional sparse representation to obtain the sparse coefficients of the low-frequency and high-frequency components, respectively. Third, the low-frequency coefficients are fused using regional energy and the average l1-norm, while the high-frequency coefficients are fused using the improved spatial frequency and the average l1-norm. Finally, the fused image is reconstructed by the inverse NSST. Experimental results on series of multimodal brain images, including CT, MR-T2, PET, and SPECT, demonstrate that the proposed method achieves state-of-the-art performance compared with other popular medical image fusion methods in both objective and subjective assessment.
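
As a rough illustration of the fusion rules named in the abstract, the Python sketch below implements plausible versions of the activity measures it mentions (regional energy, spatial frequency, average l1-norm) on sub-band images and CSR coefficient maps that are assumed to be already computed. The NSST decomposition, the sub-dictionary learning, and the CSR coding are not reproduced here, and the window size, the simple sum of the two activity measures, and the choose-max combination are assumptions for illustration rather than the paper's exact formulas.

import numpy as np
from scipy.ndimage import uniform_filter

def regional_energy(band, win=3):
    # Regional energy: windowed mean of squared sub-band coefficients.
    return uniform_filter(band ** 2, size=win)

def spatial_frequency(band, win=3):
    # Simple spatial-frequency activity from windowed row/column gradients;
    # the paper's "improved" spatial frequency is not specified here.
    rf = np.zeros_like(band)
    cf = np.zeros_like(band)
    rf[:, 1:] = (band[:, 1:] - band[:, :-1]) ** 2
    cf[1:, :] = (band[1:, :] - band[:-1, :]) ** 2
    return np.sqrt(uniform_filter(rf, size=win) + uniform_filter(cf, size=win))

def average_l1(coeff_maps, win=3):
    # Average windowed l1-norm over the stack of CSR coefficient maps.
    return uniform_filter(np.abs(coeff_maps).mean(axis=0), size=win)

def fuse_band(band_a, band_b, act_a, act_b):
    # Choose-max rule driven by the per-pixel activity maps.
    return np.where(act_a >= act_b, band_a, band_b)

def fuse_low(band_a, band_b, coeffs_a, coeffs_b, win=3):
    # Low-frequency rule: regional energy plus average l1-norm (assumed sum).
    act_a = regional_energy(band_a, win) + average_l1(coeffs_a, win)
    act_b = regional_energy(band_b, win) + average_l1(coeffs_b, win)
    return fuse_band(band_a, band_b, act_a, act_b)

def fuse_high(band_a, band_b, coeffs_a, coeffs_b, win=3):
    # High-frequency rule: spatial frequency plus average l1-norm (assumed sum).
    act_a = spatial_frequency(band_a, win) + average_l1(coeffs_a, win)
    act_b = spatial_frequency(band_b, win) + average_l1(coeffs_b, win)
    return fuse_band(band_a, band_b, act_a, act_b)

if __name__ == "__main__":
    # Shape check on random data only; real inputs would be NSST sub-bands of
    # two registered source images and their CSR coefficient stacks.
    rng = np.random.default_rng(0)
    a, b = rng.standard_normal((2, 64, 64))
    ca, cb = rng.standard_normal((2, 8, 64, 64)) * 0.1
    print(fuse_low(a, b, ca, cb).shape, fuse_high(a, b, ca, cb).shape)

The choose-max combination is one common fusion rule; a weighted average driven by the same activity maps would be an equally reasonable variant under these assumptions.
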
Pages: 36401-36421
Number of pages: 21