HCT-Unet: multi-target medical image segmentation via a hybrid CNN-transformer Unet incorporating multi-axis gated multi-layer perceptron

Cited by: 5
Authors
Fan, Yazhuo [1 ]
Song, Jianhua [1 ]
Yuan, Lei [1 ,2 ]
Jia, Yunlin [2 ]
Affiliations
[1] Minnan Normal Univ, Sch Phys & Informat Engn, Key Lab Light Field Manipulat & Syst Integrat App, Zhangzhou 363000, Peoples R China
[2] Jinchuan Grp Copper & Precious Met Co Ltd, Jinchang 737100, Peoples R China
Keywords
Medical image segmentation; CNN; Transformer; Multi-scale fusion; GMLP; NET; NETWORKS
DOI
10.1007/s00371-024-03612-y
CLC number
TP31 [Computer Software]
Discipline codes
081202; 0835
Abstract
In recent years, network structures that combine convolutional neural networks (CNNs) and Transformers have been built to exploit the complementary strengths of the two methods in medical image segmentation. However, most of these methods integrate CNN and Transformer at only a single level and therefore cannot extract low-level detail features and high-level abstract information simultaneously. Such structures also lack flexibility, as they cannot dynamically adjust the contributions of different feature maps. To address these limitations, we introduce HCT-Unet, a hybrid CNN-Transformer model specifically designed for multi-organ medical image segmentation. HCT-Unet introduces a tunable hybrid paradigm that differs significantly from conventional hybrid architectures: at each stage, it deploys a CNN to capture short-range information and a Transformer to extract long-range information. Furthermore, we have designed a multi-functional multi-scale fusion bridge that progressively integrates information from different scales and dynamically adjusts the attention weights of local and global features. With the benefit of these two designs, HCT-Unet demonstrates robust dependency-modeling and representation capabilities in multi-target medical image tasks. Experimental results show strong performance on medical image segmentation tasks: in multi-organ segmentation, HCT-Unet achieved a Dice similarity coefficient (DSC) of 82.23%, and in cardiac segmentation it reached a DSC of 91%, significantly outperforming previous state-of-the-art networks. The code has been released on Zenodo: https://zenodo.org/doi/10.5281/zenodo.11070837.
Pages: 3457-3472
Number of pages: 16
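Note: the abstract describes, at a high level, a stage-wise hybrid in which a CNN branch captures short-range detail and a Transformer branch captures long-range context, with the contributions of local and global features re-weighted dynamically. Below is a minimal, hypothetical PyTorch sketch of one such stage, written only to make that idea concrete. The class name HybridStage, the layer choices, and the sigmoid gate are assumptions of this sketch, not the authors' implementation (which is available at the Zenodo link above).

# Hypothetical sketch (not the authors' code): one HCT-Unet-style stage that
# runs a CNN branch (short-range detail) and a Transformer branch (long-range
# context) in parallel, then fuses them with a learnable per-channel gate.
import torch
import torch.nn as nn


class HybridStage(nn.Module):
    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # CNN branch: two 3x3 convolutions capture local, short-range features.
        self.cnn = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )
        # Transformer branch: self-attention over flattened spatial positions
        # models long-range dependencies across the whole feature map.
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Learnable gate that re-weights the local vs. global branch; this is
        # an assumed stand-in for the paper's dynamic attention-weight scheme.
        self.gate = nn.Parameter(torch.zeros(1, channels, 1, 1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local_feat = self.cnn(x)
        tokens = self.norm(x.flatten(2).transpose(1, 2))      # (B, H*W, C)
        global_feat, _ = self.attn(tokens, tokens, tokens)
        global_feat = global_feat.transpose(1, 2).reshape(b, c, h, w)
        g = torch.sigmoid(self.gate)                          # per-channel mix
        return g * local_feat + (1.0 - g) * global_feat


if __name__ == "__main__":
    stage = HybridStage(channels=64)
    out = stage(torch.randn(1, 64, 32, 32))
    print(out.shape)  # torch.Size([1, 64, 32, 32])

The per-channel gate g starts at an even 0.5/0.5 mix (sigmoid of zero) and learns to lean on the convolutional branch where fine detail dominates and on the attention branch where global context matters, mirroring the dynamic re-weighting of local and global features described in the abstract.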