UTNet: A Hybrid Transformer Architecture for Medical Image Segmentation

Cited by: 366
Authors
Gao, Yunhe [1 ]
Zhou, Mu [1 ,2 ]
Metaxas, Dimitris N. [1 ]
Affiliations
[1] Rutgers State Univ, Dept Comp Sci, Piscataway, NJ 08854 USA
[2] SenseBrain & Shanghai AI Lab & Ctr Perceptual & I, Shanghai, Peoples R China
DOI
10.1007/978-3-030-87199-4_6
CLC classification
TP18 [Artificial Intelligence Theory];
Subject classification codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The Transformer architecture has proven successful in a number of natural language processing tasks. However, its application to medical vision remains largely unexplored. In this study, we present UTNet, a simple yet powerful hybrid Transformer architecture that integrates self-attention into a convolutional neural network to enhance medical image segmentation. UTNet applies self-attention modules in both the encoder and decoder to capture long-range dependencies at different scales with minimal overhead. To this end, we propose an efficient self-attention mechanism with relative position encoding that reduces the complexity of the self-attention operation significantly, from O(n^2) to approximately O(n). A new self-attention decoder is also proposed to recover fine-grained details from the skip connections in the encoder. Our approach addresses the dilemma that Transformers require huge amounts of data to learn vision inductive biases. Our hybrid layer design allows the Transformer to be initialized within convolutional networks without the need for pre-training. We evaluated UTNet on a multi-label, multi-vendor cardiac magnetic resonance imaging cohort. UTNet demonstrates superior segmentation performance and robustness compared with state-of-the-art approaches, holding the promise to generalize well to other medical image segmentation tasks.
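The abstract's complexity reduction from O(n^2) to roughly O(n) comes from attending over a reduced set of key/value tokens rather than all n positions. The sketch below illustrates the idea in NumPy; the mean-pooling projection and the `pool` parameter are hypothetical stand-ins for the paper's learned low-resolution projections, not the authors' exact implementation.

```python
import numpy as np

def efficient_self_attention(x, pool=8):
    """Sub-quadratic self-attention sketch over a flattened feature map.

    x: (n, d) array of n tokens with d channels.
    Queries keep full resolution (n tokens); keys/values are pooled
    down to k = n // pool tokens, so the score matrix is (n, k)
    instead of (n, n) -- cost O(n*k) rather than O(n^2).
    """
    n, d = x.shape
    k = n // pool
    q = x
    # Hypothetical low-resolution projection: average-pool groups of
    # `pool` consecutive tokens (UTNet uses learned projections instead).
    kv = x[: k * pool].reshape(k, pool, d).mean(axis=1)  # (k, d)
    scores = q @ kv.T / np.sqrt(d)                        # (n, k)
    # Numerically stable softmax over the pooled key axis.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ kv                                   # (n, d)
```

With n = 4096 tokens and pool = 8, the attention map shrinks from 4096 x 4096 to 4096 x 512, which is what makes it affordable to place such modules at multiple encoder and decoder scales.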
Pages: 61-71 (11 pages)
Related papers
50 total
  • [1] Hybrid Transformer and Convolution for Medical Image Segmentation
    Wang, Fan
    Wang, Bo
    2022 INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, COMPUTER VISION AND MACHINE LEARNING (ICICML), 2022, : 156 - 159
  • [2] HResFormer: Hybrid Residual Transformer for Volumetric Medical Image Segmentation
    Ren, Sucheng
    Li, Xiaomeng
    IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS, 2025,
  • [3] SEGTRANSVAE: HYBRID CNN - TRANSFORMER WITH REGULARIZATION FOR MEDICAL IMAGE SEGMENTATION
    Quan-Dung Pham
    Hai Nguyen-Truong
    Nam Nguyen Phuong
    Nguyen, Khoa N. A.
    Nguyen, Chanh D. T.
    Bui, Trung
    Truong, Steven Q. H.
    2022 IEEE INTERNATIONAL SYMPOSIUM ON BIOMEDICAL IMAGING (IEEE ISBI 2022), 2022,
  • [4] ATTransUNet: An enhanced hybrid transformer architecture for ultrasound and histopathology image segmentation
    Li, Xuewei
    Pang, Shuo
    Zhang, Ruixuan
    Zhu, Jialin
    Fu, Xuzhou
    Tian, Yuan
    Gao, Jie
    COMPUTERS IN BIOLOGY AND MEDICINE, 2023, 152
  • [5] CNN-Transformer Hybrid Architecture for Underwater Sonar Image Segmentation
    Lei, Juan
    Wang, Huigang
    Lei, Zelin
    Li, Jiayuan
    Rong, Shaowei
    REMOTE SENSING, 2025, 17 (04)
  • [6] From UNet to Transformer: Progress in the Application of Hybrid Models in Medical Image Segmentation
    Yin, Yixiao
    Ma, Jingang
    Zhang, Wenkai
    Jiang, Liang
    LASER & OPTOELECTRONICS PROGRESS, 2025, 62 (02)
  • [7] A hybrid enhanced attention transformer network for medical ultrasound image segmentation
    Jiang, Tao
    Xing, Wenyu
    Yu, Ming
    Ta, Dean
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 86
  • [8] Slimmable transformer with hybrid axial-attention for medical image segmentation
    Hu, Y.
    Mu, N.
    Liu, L.
    Zhang, L.
    Jiang, J.
    Li, X.
    COMPUTERS IN BIOLOGY AND MEDICINE, 2024, 173
  • [9] Enhanced transformer encoder and hybrid cascaded upsampler for medical image segmentation
    Li, Chaoqun
    Wang, Liejun
    Cheng, Shuli
    EXPERT SYSTEMS WITH APPLICATIONS, 2024, 238
  • [10] TFCNs: A CNN-Transformer Hybrid Network for Medical Image Segmentation
    Li, Zihan
    Li, Dihan
    Xu, Cangbai
    Wang, Weice
    Hong, Qingqi
    Li, Qingde
    Tian, Jie
    ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING - ICANN 2022, PT IV, 2022, 13532 : 781 - 792