TransUNet: Rethinking the U-Net architecture design for medical image segmentation through the lens of transformers

Cited by: 70
Authors
Chen, Jieneng [1]
Mei, Jieru [1]
Li, Xianhang [2]
Lu, Yongyi [1]
Yu, Qihang [1,3]
Wei, Qingyue
Luo, Xiangde [4]
Xie, Yutong [8]
Adeli, Ehsan [5]
Wang, Yan [7]
Lungren, Matthew P. [5]
Zhang, Shaoting [4]
Xing, Lei [3]
Lu, Le [6]
Yuille, Alan [1]
Zhou, Yuyin [2]
Affiliations
[1] Johns Hopkins University, Department of Computer Science, Baltimore, MD 21218, USA
[2] University of California, Santa Cruz, Department of Computer Science & Engineering, Santa Cruz, CA 95064, USA
[3] Stanford University, Department of Radiation Oncology, Stanford, CA 94305, USA
[4] Shanghai AI Lab, Shanghai 200000, People's Republic of China
[5] Stanford University, School of Medicine, Stanford, CA 94305, USA
[6] Alibaba Group, DAMO Academy, New York, NY 10014, USA
[7] East China Normal University, Shanghai 200062, People's Republic of China
[8] University of Adelaide, Australian Institute for Machine Learning, Adelaide, Australia
Keywords
Medical image segmentation; Vision Transformers; U-Net
DOI
10.1016/j.media.2024.103280
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
Medical image segmentation is crucial for healthcare, yet convolution-based methods such as U-Net are limited in modeling long-range dependencies. To address this, Transformers designed for sequence-to-sequence prediction have been integrated into medical image segmentation. However, a comprehensive understanding of how Transformers' self-attention interacts with the components of U-Net is lacking. TransUNet, first introduced in 2021, is widely recognized as one of the first models to integrate Transformers into medical image analysis. In this study, we present the versatile TransUNet framework, which encapsulates Transformers' self-attention in two key modules: (1) a Transformer encoder that tokenizes image patches from a convolutional neural network (CNN) feature map, facilitating global context extraction, and (2) a Transformer decoder that refines candidate regions through cross-attention between proposals and U-Net features. These modules can be flexibly inserted into the U-Net backbone, resulting in three configurations: Encoder-only, Decoder-only, and Encoder+Decoder. TransUNet provides a library encompassing both 2D and 3D implementations, enabling users to easily tailor the chosen architecture. Our findings highlight the encoder's efficacy in modeling interactions among multiple abdominal organs and the decoder's strength in handling small targets such as tumors. TransUNet excels in diverse medical applications, including multi-organ segmentation, pancreatic tumor segmentation, and hepatic vessel segmentation. Notably, it achieves significant average Dice improvements of 1.06% and 4.30% for multi-organ segmentation and pancreatic tumor segmentation, respectively, compared to the highly competitive nnU-Net, and surpasses the top-1 solution in the BraTS2021 challenge. 2D and 3D code and models are available at https://github.com/Beckschen/TransUNet and https://github.com/Beckschen/TransUNet-3D, respectively.
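The abstract's first module, a Transformer encoder that tokenizes patches from a CNN feature map, hinges on one reshaping step: a spatial feature map is cut into non-overlapping windows, and each window is flattened into one token vector. The sketch below illustrates only that tokenization step in NumPy; the function name, patch size, and tensor shapes are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tokenize_feature_map(feat, patch=2):
    """Flatten a CNN feature map (C, H, W) into a sequence of patch tokens.

    Assumes H and W are divisible by `patch`. Each non-overlapping
    patch x patch window becomes one token of dimension C * patch * patch.
    """
    C, H, W = feat.shape
    gh, gw = H // patch, W // patch
    # Split each spatial axis into (grid, within-patch) factors.
    x = feat.reshape(C, gh, patch, gw, patch)
    # Reorder to (gh, gw, C, patch, patch) so each window is contiguous,
    # then flatten the grid into a token sequence.
    tokens = x.transpose(1, 3, 0, 2, 4).reshape(gh * gw, C * patch * patch)
    return tokens  # shape: (num_tokens, token_dim)

feat = np.random.rand(8, 16, 16)            # e.g. an 8-channel 16x16 feature map
tokens = tokenize_feature_map(feat, patch=2)
print(tokens.shape)                         # (64, 32)
```

In a full encoder these tokens would be linearly projected, given positional embeddings, and fed to self-attention layers; the reshape above is just the bridge from the CNN's spatial grid to the Transformer's sequence view.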
Pages: 10
相关论文
共 44 条
  • [1] Antonelli M, 2021, Arxiv, DOI arXiv:2106.05735
  • [2] Cao Hu, 2023, Computer Vision - ECCV 2022 Workshops: Proceedings. Lecture Notes in Computer Science (13803), P205, DOI 10.1007/978-3-031-25066-8_9
  • [3] Carion N., 2020, LNCS, V12346, P213, DOI [DOI 10.1007/978-3-030-58452-813, 10.1007/978- 3- 030-58452-8 13, DOI 10.1007/978-3-030-58452-8_13]
  • [4] Chen J., 2021, arXiv preprint: arXiv:2102.04306
  • [5] Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation
    Chen, Liang-Chieh
    Zhu, Yukun
    Papandreou, George
    Schroff, Florian
    Adam, Hartwig
    [J]. COMPUTER VISION - ECCV 2018, PT VII, 2018, 11211 : 833 - 851
  • [6] Cheng B, 2021, ADV NEUR IN, V34
  • [7] Masked-attention Mask Transformer for Universal Image Segmentation
    Cheng, Bowen
    Misra, Ishan
    Schwing, Alexander G.
    Kirillov, Alexander
    Girdhar, Rohit
    [J]. 2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2022), 2022, : 1280 - 1289
  • [8] Child R, 2019, Arxiv, DOI arXiv:1904.10509
  • [9] Dosovitskiy A., 2021, INT C LEARNING REPRE
  • [10] Swin UNETR: Swin Transformers for Semantic Segmentation of Brain Tumors in MRI Images
    Hatamizadeh, Ali
    Nath, Vishwesh
    Tang, Yucheng
    Yang, Dong
    Roth, Holger R.
    Xu, Daguang
    [J]. BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES, BRAINLES 2021, PT I, 2022, 12962 : 272 - 284