DENSELY CONNECTED SWIN-UNET FOR MULTISCALE INFORMATION AGGREGATION IN MEDICAL IMAGE SEGMENTATION

Cited by: 8
Authors
Wang, Ziyang [1 ]
Su, Meiwen [2 ]
Zheng, Jian-Qing [3 ]
Liu, Yang [4 ]
Affiliations
[1] Univ Oxford, Dept Comp Sci, Oxford, England
[2] Univ Hong Kong, Dept Stat & Actuarial Sci, Hong Kong, Peoples R China
[3] Univ Oxford, Kennedy Inst Rheumatol, Oxford, England
[4] Univ Plymouth, Dept Comp Sci, Plymouth, Devon, England
Source
2023 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING, ICIP | 2023
Keywords
Semantic Segmentation; UNet; Vision Transformer;
DOI
10.1109/ICIP49359.2023.10222451
CLC Number
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Image semantic segmentation is a dense prediction task in computer vision that has been dominated by deep learning techniques in recent years. UNet, a symmetric encoder-decoder end-to-end Convolutional Neural Network (CNN) with skip connections, has shown promising performance. Aiming to process multiscale feature information efficiently, we propose a new Densely Connected Swin-UNet (DCS-UNet) with multiscale information aggregation for medical image segmentation. Firstly, inspired by the Swin Transformer, which models long-range dependencies via shifted-window self-attention, this work proposes fully ViT-based network blocks with a shifted-window approach, resulting in a purely self-attention-based U-shaped segmentation network. The relevant layers, including feature sampling and image tokenization, are redesigned in the ViT style. Secondly, a full-scale deep supervision scheme is developed to process the aggregated feature maps at the various resolutions generated by different decoder levels. Thirdly, dense skip connections are proposed that allow semantic feature information to be transferred thoroughly from different encoder levels to lower-level decoders. Our proposed method is validated on a public benchmark MRI cardiac segmentation dataset, with comprehensive evaluation metrics showing competitive performance against other encoder-decoder variants. The code is available at https://github.com/ziyangwang007/VIT4UNet.
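The shifted-window self-attention that the abstract builds on (from the Swin Transformer) rests on two tensor manipulations: partitioning the feature map into non-overlapping windows, and cyclically shifting the map between consecutive blocks so that attention windows straddle the previous block's window boundaries. A minimal NumPy sketch of those two steps is shown below; the function names and the single-channel example are illustrative and are not taken from the paper's released code.

```python
import numpy as np

def window_partition(x, window_size):
    """Split an (H, W, C) feature map into non-overlapping windows.

    Returns an array of shape (num_windows, window_size**2, C),
    where each row of a window holds the tokens attention operates on.
    Assumes H and W are divisible by window_size.
    """
    H, W, C = x.shape
    x = x.reshape(H // window_size, window_size,
                  W // window_size, window_size, C)
    # Group the two block axes together, then flatten each window.
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, window_size * window_size, C)

def cyclic_shift(x, shift):
    """Cyclically shift the map so the next block's windows cross
    the previous block's window boundaries (Swin's 'shifted window')."""
    return np.roll(x, shift=(-shift, -shift), axis=(0, 1))

# Toy 4x4 single-channel map: value at (i, j) is 4*i + j.
x = np.arange(16).reshape(4, 4, 1)
windows = window_partition(x, 2)      # 4 windows of 4 tokens each
shifted = cyclic_shift(x, 1)          # shifted[0, 0] is x[1, 1]
```

Within each window, standard multi-head self-attention is applied to the `window_size**2` tokens; alternating plain and shifted partitions is what gives the network cross-window information flow at linear cost in image size.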
Pages: 940-944
Page count: 5