Dilated multi-scale residual attention (DMRA) U-Net: three-dimensional (3D) dilated multi-scale residual attention U-Net for brain tumor segmentation

Cited by: 0
Authors
Zhang, Lihong [1 ]
Li, Yuzhuo [1 ]
Liang, Yingbo [1 ]
Xu, Chongxin [1 ]
Liu, Tong [1 ]
Sun, Junding [1 ]
Affiliations
[1] Henan Polytech Univ, Coll Comp Sci & Technol, 2001 Century Ave, Jiaozuo 454003, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Brain tumor segmentation (BraTS); U-Net; three-dimensional (3D); multi-scale convolution residual modules (MCR modules); CONVOLUTIONAL NEURAL-NETWORKS; MRI;
DOI
10.21037/qims-24-779
Chinese Library Classification (CLC)
R8 [Special Medicine]; R445 [Diagnostic Imaging];
Discipline classification codes
1002; 100207; 1009;
Abstract
Background: The precise identification of the position and form of a tumor mass can improve early diagnosis and treatment. However, due to the complicated tumor categories and varying sizes and shapes, the segmentation of brain gliomas and their internal sub-regions remains very challenging. This study sought to design a new deep-learning network based on the three-dimensional (3D) U-Net to address its shortcomings in brain tumor segmentation (BraTS) tasks.

Methods: We developed a 3D dilated multi-scale residual attention U-Net (DMRA-U-Net) model for magnetic resonance imaging (MRI) BraTS. It used dilated convolution residual (DCR) modules to better process shallow features, multi-scale convolution residual (MCR) modules in the bottom encoding path to create a richer and more comprehensive feature expression while reducing overall information loss or blurring, and a channel attention (CA) module between the encoding and decoding paths to address the problem of retrieving and preserving important features during the processing of deep feature maps.

Results: The BraTS 2018-2021 datasets served as the training and evaluation datasets for this study. The proposed architecture was assessed using metrics such as the dice similarity coefficient (DSC), Hausdorff distance (HD), and sensitivity (Sens). The DMRA-U-Net model segments the whole tumor (WT), tumor core (TC), and enhancing tumor (ET) regions of brain tumors. With the proposed architecture, the DSCs were 0.9012, 0.8867, and 0.8813, the HDs were 28.86, 13.34, and 10.88 mm, and the Sens values were 0.9429, 0.9452, and 0.9303 for the WT, TC, and ET regions, respectively. Compared to the traditional 3D U-Net, the DSC of the DMRA-U-Net increased by 4.5%, 2.5%, and 0.8%, the HD decreased by 21.83, 16.42, and 10.00 mm, and the Sens increased by 0.4%, 0.7%, and 1.4% for the WT, TC, and ET regions, respectively. The statistical comparison of the performance indicators showed that our model performed well overall in the segmentation of the WT, TC, and ET regions.

Conclusions: We developed a promising tumor segmentation model.
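The metrics named in the abstract follow their standard definitions; for reference, with a predicted mask P and a ground-truth mask G (the exact variant the paper uses, e.g., HD versus 95th-percentile HD, is not stated in this abstract):

\mathrm{DSC}(P, G) = \frac{2\,|P \cap G|}{|P| + |G|}, \qquad
\mathrm{Sens} = \frac{TP}{TP + FN}, \qquad
\mathrm{HD}(P, G) = \max\left\{ \sup_{p \in \partial P} \inf_{g \in \partial G} \lVert p - g \rVert,\; \sup_{g \in \partial G} \inf_{p \in \partial P} \lVert g - p \rVert \right\}

The abstract names the DCR, MCR, and CA modules but not their internal designs, so the PyTorch sketch below is only a plausible illustration of such blocks; the channel counts, dilation rates, kernel sizes, and the squeeze-and-excitation-style attention are assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class DCRBlock3D(nn.Module):
    # Dilated convolution residual (DCR) block, as one might build it for
    # shallow encoder features: two dilated 3x3x3 convolutions plus a skip.
    def __init__(self, channels, dilation=2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=dilation, dilation=dilation, bias=False),
            nn.InstanceNorm3d(channels),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=dilation, dilation=dilation, bias=False),
            nn.InstanceNorm3d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # residual connection


class MCRBlock3D(nn.Module):
    # Multi-scale convolution residual (MCR) block: parallel branches with
    # different receptive fields, fused by a 1x1x1 convolution and added back.
    def __init__(self, channels):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Conv3d(channels, channels, k, padding=k // 2, bias=False)
            for k in (1, 3, 5)
        ])
        self.fuse = nn.Conv3d(3 * channels, channels, 1, bias=False)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        multi = torch.cat([b(x) for b in self.branches], dim=1)
        return self.act(x + self.fuse(multi))


class ChannelAttention3D(nn.Module):
    # Squeeze-and-excitation style channel attention (CA) gate that could sit
    # between the encoding and decoding paths.
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool3d(1)
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        b, c = x.shape[:2]
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1, 1)
        return x * w  # reweight channels


if __name__ == "__main__":
    x = torch.randn(1, 32, 16, 16, 16)     # (batch, channels, depth, height, width)
    x = DCRBlock3D(32)(x)                  # shallow-feature processing
    x = MCRBlock3D(32)(x)                  # bottom-path multi-scale features
    x = ChannelAttention3D(32)(x)          # channel reweighting before decoding
    print(x.shape)                         # torch.Size([1, 32, 16, 16, 16])

A forward pass on a random 32-channel volume (the __main__ block) shows that all three blocks preserve the input shape, which is what would allow such modules to be slotted into the shallow layers, the bottom encoding path, and the encoder-decoder junction of a standard 3D U-Net.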
Pages: 7249-7264
Number of pages: 16