PET and MRI image fusion based on a dense convolutional network with dual attention

Cited by: 7
Authors
Li, Bicao [1 ,3 ,4 ]
Hwang, Jenq-Neng [2 ]
Liu, Zhoufeng [1 ]
Li, Chunlei [1 ]
Wang, Zongmin [3 ,4 ]
Affiliations
[1] Zhongyuan Univ Technol, Sch Elect & Informat Engn, Zhengzhou 450007, Peoples R China
[2] Univ Washington, Dept Elect Engn, Seattle, WA 98195 USA
[3] Zhengzhou Univ, Sch Informat Engn, Zhengzhou 450001, Peoples R China
[4] Zhengzhou Univ, Cooperat Innovat Ctr Internet Healthcare, Zhengzhou 450000, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Channel attention; Densely connected network; Image fusion; Spatial attention; PET and MRI images; GENERATIVE ADVERSARIAL NETWORK; QUALITY ASSESSMENT; FRAMEWORK;
DOI
10.1016/j.compbiomed.2022.106339
CLC number
Q [Biological Sciences];
Discipline classification code
07 ; 0710 ; 09 ;
Abstract
Fusion techniques for medical images of different modalities, e.g., Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI), play an increasingly significant role in many clinical applications by integrating the complementary information from different medical images. In this paper, we propose a novel fusion model based on a dense convolutional network with dual attention (CSpA-DN) for PET and MRI images. In our framework, an encoder composed of a densely connected neural network is constructed to extract features from the source images, and a decoder network is employed to generate the fused image from these features. Simultaneously, a dual-attention module is introduced in the encoder and decoder to adaptively integrate local features along with their global dependencies. In the dual-attention module, a spatial attention block re-expresses the feature at each position of the encoder network as a weighted sum of the feature information at all positions. Meanwhile, the interdependent correlations among all feature channels are aggregated via a channel attention module. In addition, we design a specific loss function comprising image loss, structural loss, gradient loss, and perception loss to preserve more structural and detail information and to sharpen the edges of targets. Our approach enables the fused images not only to preserve abundant functional information from PET images but also to retain the rich detail structures of MRI images. Experimental results on publicly available datasets demonstrate the superiority of the CSpA-DN model over state-of-the-art methods in both qualitative observation and objective assessment.
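The dual-attention idea described above (a spatial attention block that rewrites each position as a weighted sum over all positions, and a channel attention block that aggregates correlations across channels) follows the general position/channel attention formulation; the sketch below illustrates the core computation in NumPy. This is a minimal illustration only, not the authors' implementation: the function names, the absence of learned projections, and the plain softmax normalization are all assumptions.

```python
import numpy as np

def spatial_attention(feat):
    """Position (spatial) attention sketch: the feature at each location is
    replaced by a weighted sum of the features at all positions, with weights
    given by a softmax over pairwise position affinities.
    feat: array of shape (C, H, W)."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                   # flatten positions: (C, N)
    energy = x.T @ x                             # (N, N) position affinities
    energy -= energy.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(energy)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over positions
    out = x @ attn.T                             # weighted sum of all positions
    return out.reshape(C, H, W)

def channel_attention(feat):
    """Channel attention sketch: aggregates the interdependent correlations
    among feature channels via a (C, C) affinity matrix.
    feat: array of shape (C, H, W)."""
    C, H, W = feat.shape
    x = feat.reshape(C, H * W)                   # (C, N)
    energy = x @ x.T                             # (C, C) channel affinities
    energy -= energy.max(axis=1, keepdims=True)  # numerical stability
    attn = np.exp(energy)
    attn /= attn.sum(axis=1, keepdims=True)      # softmax over channels
    return (attn @ x).reshape(C, H, W)           # reweighted channel mix
```

In the paper's architecture these blocks sit inside the encoder/decoder and operate on learned feature maps; here they are applied to a raw array purely to show that both outputs keep the input's (C, H, W) shape, which is what lets them be inserted between convolutional layers.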
Pages: 20