CCT-Unet: A U-Shaped Network Based on Convolution Coupled Transformer for Segmentation of Peripheral and Transition Zones in Prostate MRI

Cited by: 10
Authors
Yan, Yifei [1 ]
Liu, Rongzong [2 ]
Chen, Haobo [1 ]
Zhang, Limin [2 ]
Zhang, Qi [1 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Smart Med & AI Based Radiol Technol SMART Lab, Shanghai 200444, Peoples R China
[2] Fudan Univ, Huashan Hosp, Dept Urol, Shanghai 200040, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
MRI; segmentation; prostate; transformer; CCT-Unet;
DOI
10.1109/JBHI.2023.3289913
CLC Number
TP [Automation Technology; Computer Technology];
Discipline Classification Code
0812;
Abstract
Accurate segmentation of the prostate in magnetic resonance imaging (MRI) provides a reliable basis for artificial intelligence-based diagnosis of prostate cancer. Transformer-based models are increasingly used in image analysis because of their ability to capture long-range global contextual features. Although the Transformer can represent overall appearance and long-range contour information, it performs poorly on small-scale prostate MRI datasets because it is insensitive to local variation, such as the heterogeneity of grayscale intensities in the peripheral and transition zones across patients; the convolutional neural network (CNN), in contrast, retains these local features well. A robust prostate segmentation model that aggregates the strengths of the CNN and the Transformer is therefore desirable. In this work, a U-shaped network based on the convolution coupled Transformer, named the convolution coupled Transformer U-Net (CCT-Unet), is proposed for segmentation of the peripheral and transition zones in prostate MRI. A convolutional embedding block is first designed to encode the high-resolution input and retain the edge detail of the image. A convolution coupled Transformer block is then proposed to enhance local feature extraction and capture long-range correlations that encompass anatomical information. A feature conversion module is also proposed to alleviate the semantic gap in the skip connections. Extensive experiments comparing CCT-Unet with several state-of-the-art methods on both the ProstateX open dataset and the self-built Huashan dataset consistently demonstrate the accuracy and robustness of CCT-Unet in prostate MRI segmentation.
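The abstract describes the core building block as a coupling of convolution (for local texture such as zonal intensity heterogeneity) with Transformer self-attention (for long-range anatomical context) inside a U-shaped encoder-decoder. Below is a minimal PyTorch sketch of one such convolution coupled block; the class and layer names, dimensions, fusion-by-summation scheme, and hyperparameters are illustrative assumptions, not the authors' exact design.

```python
# Minimal sketch of a "convolution coupled Transformer" block in the spirit of
# CCT-Unet: a depthwise-convolution branch captures local detail while a
# self-attention branch captures long-range context; the two are fused and
# passed through an MLP. This is a hypothetical illustration, not the paper's
# published implementation.
import torch
import torch.nn as nn


class ConvCoupledTransformerBlock(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4, mlp_ratio: int = 4):
        super().__init__()
        # Local branch: depthwise 3x3 convolution retains fine edge detail.
        self.local_conv = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)
        # Global branch: multi-head self-attention over flattened patch tokens.
        self.norm1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Feed-forward network applied after fusing the two branches.
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            nn.GELU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map from the U-shaped encoder.
        b, c, h, w = x.shape
        local = self.local_conv(x)                      # (B, C, H, W)

        tokens = x.flatten(2).transpose(1, 2)           # (B, H*W, C)
        t = self.norm1(tokens)
        global_feat, _ = self.attn(t, t, t)             # (B, H*W, C)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)

        # Couple convolutional and Transformer features by simple summation
        # (an assumed fusion scheme), then refine with the MLP.
        fused = x + local + global_feat
        tokens = fused.flatten(2).transpose(1, 2)
        tokens = tokens + self.mlp(self.norm2(tokens))
        return tokens.transpose(1, 2).reshape(b, c, h, w)


if __name__ == "__main__":
    block = ConvCoupledTransformerBlock(dim=64)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

In a full U-shaped network, blocks like this would sit at the encoder stages, with a feature conversion step on the skip connections as the abstract describes; the summation fusion shown here is only one plausible way to couple the two branches.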
Pages: 4341-4351
Number of Pages: 11