U-Net Transformer: Self and Cross Attention for Medical Image Segmentation

Cited by: 158
Authors
Petit, Olivier [1 ,2 ]
Thome, Nicolas [1 ]
Rambour, Clement [1 ]
Themyr, Loic [1 ]
Collins, Toby [3 ]
Soler, Luc [2 ]
Affiliations
[1] CEDRIC, Conservatoire National des Arts et Métiers, Paris, France
[2] Visible Patient SAS, Strasbourg, France
[3] IRCAD, Strasbourg, France
Source
Machine Learning in Medical Imaging, MLMI 2021 | 2021 / Vol. 12966
Keywords
Medical image segmentation; Transformers; Self-attention; Cross-attention; Spatial layout; Global interactions
DOI
10.1007/978-3-030-87589-3_28
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Medical image segmentation remains particularly challenging for complex and low-contrast anatomical structures. In this paper, we introduce the U-Transformer network, which combines a U-shaped architecture for image segmentation with self- and cross-attention from Transformers. U-Transformer overcomes the inability of U-Nets to model long-range contextual interactions and spatial dependencies, which are arguably crucial for accurate segmentation in challenging contexts. To this end, attention mechanisms are incorporated at two main levels: a self-attention module leverages global interactions between encoder features, while cross-attention in the skip connections allows fine spatial recovery in the U-Net decoder by filtering out non-semantic features. Experiments on two abdominal CT-image datasets show the large performance gain brought by U-Transformer compared to U-Net and local Attention U-Nets. We also highlight the importance of using both self- and cross-attention, and the appealing interpretability offered by U-Transformer.
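To make the cross-attention idea concrete, the following is a minimal PyTorch sketch of an attention-filtered skip connection, under the assumption that queries come from the semantically rich (upsampled) decoder features while keys and values come from the high-resolution encoder skip features. The class name CrossAttentionSkip, the residual connection, and the normalization are illustrative choices, not the authors' implementation.

import torch
import torch.nn as nn

class CrossAttentionSkip(nn.Module):
    # Hypothetical sketch of cross-attention over a U-Net skip connection:
    # the decoder's semantic context decides which high-resolution encoder
    # details pass through to the concatenation step.
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(channels)

    def forward(self, skip: torch.Tensor, decoder: torch.Tensor) -> torch.Tensor:
        # skip, decoder: (B, C, H, W); decoder is assumed to be already
        # upsampled to the skip connection's spatial resolution.
        b, c, h, w = skip.shape
        s = skip.flatten(2).transpose(1, 2)     # (B, H*W, C): keys and values
        q = decoder.flatten(2).transpose(1, 2)  # (B, H*W, C): queries
        out, _ = self.attn(query=q, key=s, value=s)
        out = self.norm(out + s)                # residual on the skip path
        return out.transpose(1, 2).reshape(b, c, h, w)

# Minimal usage: filter a skip connection before concatenation in the decoder.
block = CrossAttentionSkip(channels=64)
skip, decoder = torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32)
print(block(skip, decoder).shape)  # torch.Size([1, 64, 32, 32])

Using the skip features as values means the block returns a re-weighted version of the high-resolution map, which is what allows non-semantic regions to be suppressed before concatenation with the decoder features.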
Pages: 267-276
Page count: 10
Related Papers
50 records in total
  • [1] Wang, Hongyi; Xie, Shiao; Lin, Lanfen; Iwamoto, Yutaro; Han, Xian-Hua; Chen, Yen-Wei; Tong, Ruofeng. Mixed Transformer U-Net for Medical Image Segmentation. 2022 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2022: 2390-2394.
  • [2] Chowdary, G. Jignesh; Yin, Zhaozheng. Diffusion Transformer U-Net for Medical Image Segmentation. Medical Image Computing and Computer Assisted Intervention, MICCAI 2023, Pt. IV, 2023, 14223: 622-631.
  • [3] Zhu, Jinghua; Sheng, Yue; Cui, Hui; Ma, Jiquan; Wang, Jijian; Xi, Heran. Cross Pyramid Transformer Makes U-Net Stronger in Medical Image Segmentation. Biomedical Signal Processing and Control, 2023, 86.
  • [4] Dan, Yongping; Jin, Weishou; Yue, Xuebin; Wang, Zhida. Enhancing Medical Image Segmentation with a Multi-Transformer U-Net. PeerJ, 2024, 12.
  • [5] Zhang, S.-J.; Peng, Z.; Li, H. SAU-Net: Medical Image Segmentation Method Based on U-Net and Self-Attention. Tien Tzu Hsueh Pao/Acta Electronica Sinica, 2022, 50 (10): 2433-2442.
  • [6] Chen, Bingzhi; Liu, Yishu; Zhang, Zheng; Lu, Guangming; Kong, Adams Wai Kin. TransAttUnet: Multi-Level Attention-Guided U-Net With Transformer for Medical Image Segmentation. IEEE Transactions on Emerging Topics in Computational Intelligence, 2024, 8 (1): 55-68.
  • [7] Wang, Zekun; Zou, Yanni; Liu, Peter X. Hybrid Dilation and Attention Residual U-Net for Medical Image Segmentation. Computers in Biology and Medicine, 2021, 134.
  • [8] Wang, Lichao; Huang, Jiahao; Xing, Xiaodan; Yang, Guang. Hybrid Swin Deformable Attention U-Net for Medical Image Segmentation. 2023 19th International Symposium on Medical Information Processing and Analysis (SIPAIM), 2023.
  • [9] Jia, Xi; Bartlett, Joseph; Zhang, Tianyang; Lu, Wenqi; Qiu, Zhaowen; Duan, Jinming. U-Net vs Transformer: Is U-Net Outdated in Medical Image Registration? Machine Learning in Medical Imaging, MLMI 2022, 2022, 13583: 151-160.
  • [10] Zhang, Xian; Feng, Ziyuan; Zhong, Tianchi; Shen, Sicheng; Zhang, Ruolin; Zhou, Lijie; Zhang, Bo; Wang, Wendong. DRA U-Net: An Attention-Based U-Net Framework for 2D Medical Image Segmentation. 2021 IEEE International Conference on Big Data (Big Data), 2021: 3936-3942.