PET and MRI image fusion based on a dense convolutional network with dual attention

Cited by: 8
Authors
Li, Bicao [1 ,3 ,4 ]
Hwang, Jenq-Neng [2 ]
Liu, Zhoufeng [1 ]
Li, Chunlei [1 ]
Wang, Zongmin [3 ,4 ]
Affiliations
[1] Zhongyuan Univ Technol, Sch Elect & Informat Engn, Zhengzhou 450007, Peoples R China
[2] Univ Washington, Dept Elect Engn, Seattle, WA 98195 USA
[3] Zhengzhou Univ, Sch Informat Engn, Zhengzhou 450001, Peoples R China
[4] Zhengzhou Univ, Cooperat Innovat Ctr Internet Healthcare, Zhengzhou 450000, Peoples R China
Funding
China Postdoctoral Science Foundation; National Natural Science Foundation of China;
Keywords
Channel attention; Densely connected network; Image fusion; Spatial attention; PET and MRI images; GENERATIVE ADVERSARIAL NETWORK; QUALITY ASSESSMENT; FRAMEWORK;
DOI
10.1016/j.compbiomed.2022.106339
CLC Number
Q [Biological Sciences];
Subject Classification Code
07 ; 0710 ; 09 ;
Abstract
The fusion of different medical imaging modalities, e.g., Positron Emission Tomography (PET) and Magnetic Resonance Imaging (MRI), is increasingly significant in many clinical applications because it integrates the complementary information of the different source images. In this paper, we propose a novel fusion model based on a dense convolutional network with dual attention (CSpA-DN) for PET and MRI images. In our framework, an encoder composed of a densely connected neural network is constructed to extract features from the source images, and a decoder network is employed to generate the fused image from these features. Simultaneously, a dual-attention module is introduced into the encoder and decoder to adaptively integrate local features with their global dependencies. In the dual-attention module, a spatial attention block computes the feature at each position as a weighted sum of the feature information at all positions of the encoder output, while a channel attention module aggregates the interdependent correlations among all feature maps. In addition, we design a specific loss function comprising image loss, structural loss, gradient loss and perceptual loss to preserve more structural and detail information and to sharpen the edges of targets. Our approach enables the fused images not only to preserve the abundant functional information of PET images but also to retain the rich structural details of MRI images. Experimental results on publicly available datasets illustrate the superiority of the CSpA-DN model over state-of-the-art methods in both qualitative observation and objective assessment.
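The dual-attention design described in the abstract (a spatial branch that re-weights every position by a weighted sum over all positions, and a channel branch that aggregates interdependencies across feature maps) can be illustrated with a minimal, hypothetical PyTorch sketch. This is not the authors' implementation: the layer widths, reduction factor, learnable residual weights, and the additive fusion of the two branches are assumptions made here for illustration only.

```python
# Hypothetical sketch of a dual-attention block (spatial + channel attention).
# All hyperparameters (reduction factor, fusion by summation) are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SpatialAttention(nn.Module):
    """Re-weights each spatial position by a softmax-weighted sum over all positions."""
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learnable residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                      # (B, C/r, HW)
        attn = F.softmax(torch.bmm(q, k), dim=-1)       # (B, HW, HW) position affinities
        v = self.value(x).flatten(2)                    # (B, C, HW)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x

class ChannelAttention(nn.Module):
    """Aggregates interdependent correlations between feature maps (channels)."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        feat = x.flatten(2)                                              # (B, C, HW)
        attn = F.softmax(torch.bmm(feat, feat.transpose(1, 2)), dim=-1)  # (B, C, C)
        out = torch.bmm(attn, feat).view(b, c, h, w)
        return self.gamma * out + x

class DualAttention(nn.Module):
    """Combines both branches; summation is an assumed fusion rule."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = SpatialAttention(channels)
        self.channel = ChannelAttention()

    def forward(self, x):
        return self.spatial(x) + self.channel(x)
```

Such a block would be inserted after the densely connected encoder (and before the decoder) so that the fused features carry both long-range spatial context and cross-channel correlations.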
Pages: 20