DCSAU-Net: A deeper and more compact split-attention U-Net for medical image segmentation

Cited by: 194
Authors
Xu, Qing [1]
Ma, Zhicheng [2]
He, Na [3]
Duan, Wenting [1]
Affiliations
[1] Univ Lincoln, Sch Comp Sci, Lincoln LN6, Lincs, England
[2] Zhejiang Gongshang Univ, Coll Comp Sci & Technol, Hangzhou 310018, Zhejiang, Peoples R China
[3] Zhejiang Wanli Univ, Sino German Inst Design & Commun, Ningbo 315100, Zhejiang, Peoples R China
Keywords
Medical image segmentation; Multi-scale fusion attention; Depthwise separable convolution; Computer-aided diagnosis;
DOI
10.1016/j.compbiomed.2023.106626
Chinese Library Classification (CLC)
Q [Biological Sciences];
Subject Classification Codes
07; 0710; 09;
Abstract
Deep learning architectures based on convolutional neural networks have achieved outstanding success in the field of computer vision. Among them, U-Net has made a major breakthrough in biomedical image segmentation and has been applied across a wide range of practical scenarios. However, the identical design of every downsampling layer in the encoder and the simply stacked convolutions prevent U-Net from extracting sufficient feature information at different depths. The increasing complexity of medical images poses new challenges to existing methods. In this paper, we propose a deeper and more compact split-attention U-shaped network, which efficiently utilises low-level and high-level semantic information based on two frameworks: primary feature conservation and a compact split-attention block. We evaluate the proposed model on the CVC-ClinicDB, 2018 Data Science Bowl, ISIC-2018, SegPC-2021 and BraTS-2021 datasets. Our model achieves better performance than other state-of-the-art methods in terms of mean intersection over union and Dice coefficient, and, more significantly, it demonstrates excellent segmentation performance on challenging images. The code for our work and further technical details are available at https://github.com/xq141839/DCSAU-Net.
Pages: 10
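
The keywords list depthwise separable convolution, the factorised operation (popularised by Xception) that underlies the "more compact" design the abstract claims. For reference, below is a minimal PyTorch sketch of that operation; the module and parameter names are illustrative assumptions, not the authors' published code.

import torch
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """A depthwise separable convolution: a per-channel spatial
    convolution followed by a 1x1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, padding=1):
        super().__init__()
        # groups=in_ch filters each input channel independently (depthwise step)
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=padding, groups=in_ch, bias=False)
        # 1x1 convolution mixes information across channels (pointwise step)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

Factorising a k x k convolution this way cuts the parameter count from roughly k^2 * C_in * C_out to k^2 * C_in + C_in * C_out, which is the usual motivation for compact segmentation architectures.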
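
The abstract reports results in mean intersection over union (mIoU) and Dice coefficient. As a quick reference, here is a minimal sketch of both metrics for binary masks in PyTorch; the function names and the epsilon smoothing term are assumptions for illustration, not taken from the paper's evaluation code.

import torch

def dice_coefficient(pred, target, eps=1e-6):
    """Dice = 2|A & B| / (|A| + |B|) for binary masks."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

def iou(pred, target, eps=1e-6):
    """IoU = |A & B| / |A | B| for binary masks."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    union = pred.sum() + target.sum() - intersection
    return (intersection + eps) / (union + eps)

# Example: score a thresholded probability map against a ground-truth mask
# score = dice_coefficient(probs > 0.5, mask)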