MHSU-Net: A more versatile neural network for medical image segmentation

Cited by: 16
Authors
Ma, Hao [1 ]
Zou, Yanni [1 ]
Liu, Peter X. [2 ]
Affiliations
[1] Nanchang Univ, Sch Informat Engn, Nanchang 330031, Jiangxi, Peoples R China
[2] Carleton Univ, Dept Syst & Comp Engn, Ottawa, ON K1S 5B6, Canada
Keywords
Medical image segmentation; Convolutional neural network; U-Net; Multiscale and context
DOI
10.1016/j.cmpb.2021.106230
Chinese Library Classification (CLC)
TP39 [Computer Applications]
Subject classification codes
081203; 0835
Abstract
Background and objective: Medical image segmentation plays an important role in clinical practice. Recently, with the development of deep learning, many convolutional neural network (CNN)-based medical image segmentation algorithms have been proposed. Among them, U-Net is one of the best-known networks. However, the standard convolutional layers used by U-Net limit its capability to capture rich features, and its consecutive maximum pooling operations cause certain features to be lost. This paper aims to improve the feature extraction capability of U-Net, to reduce feature loss during segmentation, and to improve the versatility of the proposed segmentation model.

Methods: First, to enable the model to capture richer features, we propose a novel multiscale convolutional block (MCB). The MCB adopts a wider and deeper structure and can be applied to different types of segmentation tasks. Second, a hybrid down-sampling block (HDSB) is proposed to reduce feature loss by replacing the maximum pooling layer. Third, we propose a context module (CIF) based on atrous convolution and SKNet to extract sufficient context information. Finally, we combine the CIF module with the skip connections of U-Net, yielding a structure we call Skip Connection+.

Results: We name the proposed network MHSU-Net. MHSU-Net has been evaluated on three datasets: lung, cell contour, and pancreas. Experimental results demonstrate that MHSU-Net outperforms U-Net and other state-of-the-art models under various evaluation metrics and has greater potential for clinical applications.

Conclusions: The proposed modules greatly improve the feature extraction capability of the segmentation model and effectively reduce feature loss during segmentation. MHSU-Net can also be applied to different types of medical image segmentation tasks. (c) 2021 Elsevier B.V. All rights reserved.
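To make the two feature-extraction ideas summarized above more concrete, the following is a minimal PyTorch sketch. The abstract gives no implementation details, so the branch layout, kernel sizes, and channel splits below are illustrative assumptions, not the authors' published code: the MCB is sketched as parallel convolutions of different kernel sizes, and the HDSB as a learnable strided convolution fused with max pooling.

```python
# Illustrative sketch only: MHSU-Net's exact layer configurations are not
# specified in this abstract, so every design choice below (parallel
# branches, kernel sizes, channel splits) is an assumption.
import torch
import torch.nn as nn


class MultiscaleConvBlock(nn.Module):
    """Hypothetical multiscale convolutional block (MCB): parallel 3x3 and
    5x5 branches whose outputs are concatenated, giving a wider effective
    receptive field than a single standard convolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branch3 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch // 2, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch // 2), nn.ReLU(inplace=True))
        self.branch5 = nn.Sequential(
            nn.Conv2d(in_ch, out_ch // 2, kernel_size=5, padding=2),
            nn.BatchNorm2d(out_ch // 2), nn.ReLU(inplace=True))

    def forward(self, x):
        return torch.cat([self.branch3(x), self.branch5(x)], dim=1)


class HybridDownsample(nn.Module):
    """Hypothetical hybrid down-sampling block (HDSB): a learnable strided
    convolution run in parallel with max pooling, then fused by a 1x1
    convolution, so features discarded by pooling alone can be retained."""

    def __init__(self, channels):
        super().__init__()
        self.conv_down = nn.Conv2d(channels, channels, kernel_size=3,
                                   stride=2, padding=1)
        self.pool = nn.MaxPool2d(kernel_size=2, stride=2)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, x):
        return self.fuse(torch.cat([self.conv_down(x), self.pool(x)], dim=1))


if __name__ == "__main__":
    x = torch.randn(1, 32, 128, 128)    # one 32-channel 128x128 feature map
    y = HybridDownsample(32)(MultiscaleConvBlock(32, 32)(x))
    print(y.shape)                      # torch.Size([1, 32, 64, 64])
```

In this sketch the strided-convolution branch lets the network learn what to keep during down-sampling, which is one plausible way to "reduce feature loss by replacing the maximum pooling layer" as the abstract describes.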
Pages: 10