Multimodal Multi-Head Convolutional Attention with Various Kernel Sizes for Medical Image Super-Resolution

Cited by: 64
Authors
Georgescu, Mariana-Iuliana [1 ]
Ionescu, Radu Tudor [1 ]
Miron, Andreea-Iuliana [2 ,3 ]
Savencu, Olivian [2 ,3 ]
Ristea, Nicolae-Catalin [1 ,4 ]
Verga, Nicolae [2 ,3 ]
Khan, Fahad Shahbaz [5 ,6 ]
Affiliations
[1] Univ Bucharest, Bucharest, Romania
[2] Carol Davila Univ Med & Pharm, Bucharest, Romania
[3] Coltea Hosp, Bucharest, Romania
[4] Univ Politehn Bucuresti, Bucharest, Romania
[5] MBZ Univ Artificial Intelligence, Abu Dhabi, U Arab Emirates
[6] Linkoping Univ, Linkoping, Sweden
Source
2023 IEEE/CVF WINTER CONFERENCE ON APPLICATIONS OF COMPUTER VISION (WACV) | 2023
Keywords
Contrast super-resolution; MRI; networks; single; CT
DOI
10.1109/WACV56688.2023.00223
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Super-resolving medical images can help physicians provide more accurate diagnoses. In many situations, computed tomography (CT) or magnetic resonance imaging (MRI) techniques capture several scans (modes) during a single investigation, which can be used jointly (in a multimodal fashion) to further boost the quality of super-resolution results. To this end, we propose a novel multimodal multi-head convolutional attention module to super-resolve CT and MRI scans. Our attention module uses the convolution operation to perform joint spatial-channel attention on multiple concatenated input tensors, where the kernel (receptive field) size controls the reduction rate of the spatial attention, and the number of convolutional filters controls the reduction rate of the channel attention. We introduce multiple attention heads, each head having a distinct receptive field size corresponding to a particular reduction rate for the spatial attention. We integrate our multimodal multi-head convolutional attention (MMHCA) into two deep neural architectures for super-resolution and conduct experiments on three data sets. Our empirical results show the superiority of our attention module over the state-of-the-art attention mechanisms used in super-resolution. Moreover, we conduct an ablation study to assess the impact of the components involved in our attention module, e.g., the number of inputs or the number of heads. Our code is freely available at https://github.com/lilygeorgescu/MHCA.
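The following is a minimal, illustrative PyTorch sketch of a multi-head convolutional attention block as described in the abstract: each head applies a bottleneck convolution with its own kernel size (spatial receptive field) over the concatenated multimodal inputs, and the reduced number of filters in the bottleneck acts as the channel reduction rate. The class names, the reduction factor of 4, and the kernel sizes (3, 5, 7) are assumptions made for illustration only; the authors' actual MMHCA implementation is available at the GitHub link above.

```python
# Hypothetical sketch of multi-head convolutional attention (not the authors'
# code; see https://github.com/lilygeorgescu/MHCA for the reference version).
import torch
import torch.nn as nn


class ConvAttentionHead(nn.Module):
    """One attention head: a bottleneck convolution with a given kernel size.

    The kernel size sets the spatial receptive field (spatial attention),
    while the reduced number of filters in the bottleneck acts as the
    channel reduction rate (channel attention).
    """

    def __init__(self, in_channels, kernel_size, reduction=4):
        super().__init__()
        hidden = max(in_channels // reduction, 1)
        padding = kernel_size // 2  # keep spatial size unchanged
        self.attend = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size, padding=padding),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, in_channels, kernel_size, padding=padding),
            nn.Sigmoid(),  # attention weights in [0, 1]
        )

    def forward(self, x):
        return x * self.attend(x)  # joint spatial-channel gating


class MultiHeadConvAttention(nn.Module):
    """Concatenate modalities, apply heads with distinct kernel sizes, fuse."""

    def __init__(self, channels_per_input, num_inputs=2, kernel_sizes=(3, 5, 7)):
        super().__init__()
        in_channels = channels_per_input * num_inputs
        self.heads = nn.ModuleList(
            [ConvAttentionHead(in_channels, k) for k in kernel_sizes]
        )
        # Fuse the head outputs back to the channel count of one modality.
        self.fuse = nn.Conv2d(
            in_channels * len(kernel_sizes), channels_per_input, kernel_size=1
        )

    def forward(self, inputs):
        x = torch.cat(inputs, dim=1)  # multimodal concatenation on channels
        out = torch.cat([head(x) for head in self.heads], dim=1)
        return self.fuse(out)


# Usage: two scan modes (e.g., T1 and T2 MRI features) with 64 channels each.
t1 = torch.randn(1, 64, 32, 32)
t2 = torch.randn(1, 64, 32, 32)
attn = MultiHeadConvAttention(channels_per_input=64, num_inputs=2)
print(attn([t1, t2]).shape)  # torch.Size([1, 64, 32, 32])
```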
Pages: 2194-2204
Number of pages: 11