TransDoubleU-Net: Dual Scale Swin Transformer With Dual Level Decoder for 3D Multimodal Brain Tumor Segmentation

Cited by: 0
Authors
Vatanpour, Marjan [1 ]
Haddadnia, Javad [1 ]
Affiliations
[1] Hakim Sabzevari Univ, Dept Biomed Engn, Sabzevar 9618676115, Iran
Keywords
Brain tumor segmentation; vision transformer; Swin transformer; dual scale; U-Net; BraTS; IMAGE; ARCHITECTURE; ATTENTION
DOI
10.1109/ACCESS.2023.3330958
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
Segmenting brain tumors in MR modalities is an important step in treatment planning. Most recent methods rely on Fully Convolutional Neural Networks (FCNNs), which achieve acceptable results for this task. Among these networks, the U-shaped architecture known as U-Net has been enormously successful in medical image segmentation. However, the locality of convolutional layers and the resulting lack of long-range dependencies in FCNNs cause problems when segmenting tumors of different sizes. Motivated by the success of Transformers in natural language processing (NLP), where the self-attention mechanism models global information, several studies have designed vision-based U-shaped Transformer variants. To retain the effectiveness of U-Net, we propose TransDoubleU-Net, which consists of two U-shaped networks for 3D multimodal brain MR image segmentation: a dual-scale Swin Transformer encoder and a dual-level decoder that combines CNNs and Transformers for better feature localization. The core of the model uses the shifted-window multi-head self-attention of the Swin Transformer together with skip connections to the CNN-based decoder. The model is evaluated on the BraTS2019 and BraTS2020 datasets and shows promising segmentation results.
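The record does not include the authors' implementation, so the following is only a minimal, hypothetical PyTorch sketch of the general idea the abstract describes: a U-shaped 3D network whose coarse-scale features pass through window-based multi-head self-attention (as in the Swin Transformer) before reaching a CNN decoder through a skip connection. All module names, channel counts, window sizes, and input shapes are illustrative assumptions; the shifted-window scheme, the dual-scale encoder branch, and the dual-level decoder of TransDoubleU-Net are intentionally omitted to keep the sketch short.

# Hypothetical sketch (not the authors' code): windowed self-attention in a
# toy 3D U-shaped encoder-decoder with one skip connection to a CNN decoder.
import torch
import torch.nn as nn


class WindowAttention3D(nn.Module):
    """Multi-head self-attention computed inside non-overlapping 3D windows."""

    def __init__(self, dim, window=4, heads=4):
        super().__init__()
        self.window = window
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                     # x: (B, C, D, H, W)
        B, C, D, H, W = x.shape
        w = self.window
        # Partition the volume into w*w*w windows, flatten each to a token sequence.
        x = x.reshape(B, C, D // w, w, H // w, w, W // w, w)
        x = x.permute(0, 2, 4, 6, 3, 5, 7, 1).reshape(-1, w * w * w, C)
        x, _ = self.attn(x, x, x)             # self-attention within each window
        # Reverse the window partition back to a 3D feature volume.
        x = x.reshape(B, D // w, H // w, W // w, w, w, w, C)
        x = x.permute(0, 7, 1, 4, 2, 5, 3, 6).reshape(B, C, D, H, W)
        return x


class ToyTransEncoderCNNDecoder(nn.Module):
    """Toy U-shaped net: attention-augmented encoder, CNN decoder, one skip connection."""

    def __init__(self, in_ch=4, base=16, classes=4):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(in_ch, base, 3, padding=1), nn.GELU())
        self.down = nn.Conv3d(base, base * 2, 2, stride=2)        # downsample by 2
        self.attn = WindowAttention3D(base * 2, window=4)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)  # upsample by 2
        self.dec = nn.Sequential(nn.Conv3d(base * 2, base, 3, padding=1), nn.GELU())
        self.head = nn.Conv3d(base, classes, 1)                    # per-voxel class logits

    def forward(self, x):                      # x: (B, 4 modalities, D, H, W)
        s1 = self.enc1(x)                      # full-resolution features (skip)
        z = self.attn(self.down(s1))           # windowed self-attention at the coarse scale
        d = self.up(z)
        d = self.dec(torch.cat([d, s1], dim=1))  # skip connection into the CNN decoder
        return self.head(d)


if __name__ == "__main__":
    net = ToyTransEncoderCNNDecoder()
    logits = net(torch.randn(1, 4, 32, 32, 32))
    print(logits.shape)                        # torch.Size([1, 4, 32, 32, 32])

Per the abstract, the actual model uses two encoder scales and a decoder that mixes Transformer and CNN levels; this sketch keeps a single attention stage and a single skip connection purely to illustrate how global (attention) and local (convolutional) features can be combined in one U-shaped forward pass.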
Pages: 125511-125518
Number of pages: 8
Related Papers
50 records in total
  • [1] SwinBTS: A Method for 3D Multimodal Brain Tumor Segmentation Using Swin Transformer
    Jiang, Yun
    Zhang, Yuan
    Lin, Xin
    Dong, Jinkun
    Cheng, Tongtong
    Liang, Jing
    BRAIN SCIENCES, 2022, 12 (06)
  • [2] A 3D Dual Encoder Mirror Difference ResU-Net for Multimodal Brain Tumor Segmentation
    Xing, Qiwei
    Li, Zhihua
    Jing, Yongxia
    Chen, Xiaolin
    IEEE ACCESS, 2025, 13 : 1621 - 1635
  • [3] DenseTrans: Multimodal Brain Tumor Segmentation Using Swin Transformer
    ZongRen, Li
    Silamu, Wushouer
    Yuzhen, Wang
    Zhe, Wei
    IEEE ACCESS, 2023, 11 : 42895 - 42908
  • [4] DiffSwinTr: A diffusion model using 3D Swin Transformer for brain tumor segmentation
    Zhu, Junan
    Zhu, Hongxin
    Jia, Zhaohong
    Ma, Ping
    INTERNATIONAL JOURNAL OF IMAGING SYSTEMS AND TECHNOLOGY, 2024, 34 (03)
  • [5] Brain Tumor Segmentation Using Dual-Path Attention U-Net in 3D MRI Images
    Jun, Wen
    Xu, Haoxiang
    Wang, Zhang
    BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES (BRAINLES 2020), PT I, 2021, 12658 : 183 - 193
  • [6] Using Separated Inputs for Multimodal Brain Tumor Segmentation with 3D U-Net-like Architectures
    Boutry, N.
    Chazalon, J.
    Puybareau, E.
    Tochon, G.
    Talbot, H.
    Geraud, T.
    BRAINLESION: GLIOMA, MULTIPLE SCLEROSIS, STROKE AND TRAUMATIC BRAIN INJURIES (BRAINLES 2019), PT I, 2020, 11992 : 187 - 199
  • [7] 3D Swin Transformer for Partial Medical Auto Segmentation
    Rangnekar, Aneesh
    Jiang, Jue
    Veeraraghavan, Harini
    FAST, LOW-RESOURCE, AND ACCURATE ORGAN AND PAN-CANCER SEGMENTATION IN ABDOMEN CT, FLARE 2023, 2024, 14544 : 222 - 235
  • [8] dResU-Net: 3D deep residual U-Net based brain tumor segmentation from multimodal MRI
    Raza, Rehan
    Bajwa, Usama Ijaz
    Mehmood, Yasar
    Anwar, Muhammad Waqas
    Jamal, M. Hassan
    BIOMEDICAL SIGNAL PROCESSING AND CONTROL, 2023, 79
  • [9] MRI Brain Tumor Segmentation Using 3D U-Net with Dense Encoder Blocks and Residual Decoder Blocks
    Tie, Juhong
    Peng, Hui
    Zhou, Jiliu
    CMES-COMPUTER MODELING IN ENGINEERING & SCIENCES, 2021, 128 (02): : 427 - 445
  • [10] 3D-DDA: 3D DUAL-DOMAIN ATTENTION FOR BRAIN TUMOR SEGMENTATION
    Nhu-Tai Do
    Hoang-Son Vo-Thanh
    Tram-Tran Nguyen-Quynh
    Kim, Soo-Hyung
    2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP, 2023, : 3215 - 3219