MTC-Net: Multi-scale feature fusion network for medical image segmentation

Citations: 0
Authors
Ren S. [1 ]
Wang Y. [1 ]
Affiliations
[1] Department of Computer Science and Technology, Shandong University of Science and Technology, Qingdao
Funding
National Natural Science Foundation of China;
Keywords
deep learning; detail enhancement; feature fusion; medical image segmentation; multi-scale features;
DOI
10.3233/JIFS-237963
Abstract
Image segmentation is critical in medical image processing for lesion detection, localisation, and subsequent diagnosis, and computer-aided diagnosis (CAD) plays a significant role in improving diagnostic efficiency and accuracy. Blurred lesion boundaries and irregular shapes make the segmentation task more difficult, and because standard convolutional neural networks (CNNs) cannot capture global contextual information, they often fail to produce adequate segmentation results. In this paper we propose a multi-scale feature fusion network (MTC-Net) that integrates depthwise separable convolution and self-attention modules in the encoder to better preserve the local continuity of images and feature maps. In the decoder, a multi-branch multi-scale feature fusion module (MSFB) improves the network's feature extraction capability and is combined with a global cooperative aggregation module (GCAM) to learn more contextual information and adaptively fuse multi-scale features. To build rich hierarchical representations of irregular shapes, the proposed detail enhancement module (DEM) adaptively integrates local features with their global dependencies. To validate the effectiveness of the proposed network, we conducted extensive experiments on the public skin, breast, thyroid, and gastrointestinal-tract datasets ISIC2018, BUSI, TN3K, and Kvasir-SEG. Comparison with the latest methods also verifies the superiority of the proposed MTC-Net in terms of accuracy. © 2024 - IOS Press. All rights reserved.
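The abstract names depthwise separable convolution as the encoder's building block. As a rough illustration of that operation only (not the paper's actual implementation; the 1-D shapes, function names, and example values below are invented for this sketch), it factors a standard convolution into a per-channel spatial filter followed by a 1x1 cross-channel mixing step:

```python
# Illustrative sketch of a depthwise separable convolution (pure Python, 1-D).
# All names and shapes here are assumptions for demonstration, not MTC-Net code.

def depthwise_conv1d(x, kernels):
    """Depthwise step: each input channel is filtered by its own kernel,
    with zero padding; no mixing across channels."""
    out = []
    for channel, k in zip(x, kernels):
        pad = len(k) // 2
        padded = [0.0] * pad + channel + [0.0] * pad
        out.append([sum(padded[i + j] * k[j] for j in range(len(k)))
                    for i in range(len(channel))])
    return out

def pointwise_conv1d(x, weights):
    """Pointwise (1x1) step: mixes channels at each spatial position."""
    length = len(x[0])
    return [[sum(w[c] * x[c][i] for c in range(len(x))) for i in range(length)]
            for w in weights]

def depthwise_separable(x, dw_kernels, pw_weights):
    return pointwise_conv1d(depthwise_conv1d(x, dw_kernels), pw_weights)

# Two input channels of length 4; identity depthwise kernels, then one
# output channel that sums the two inputs.
x = [[1.0, 2.0, 3.0, 4.0], [4.0, 3.0, 2.0, 1.0]]
dw = [[0.0, 1.0, 0.0], [0.0, 1.0, 0.0]]   # identity spatial filters
pw = [[1.0, 1.0]]                          # pointwise: sum of channels
print(depthwise_separable(x, dw, pw))      # [[5.0, 5.0, 5.0, 5.0]]
```

The factorisation costs roughly k + C multiply-adds per output element instead of k * C for a full convolution, which is why such blocks are common in efficient encoders.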
Pages: 8729-8740
Page count: 11
Related papers
38 records in total
[1]  
Long J., Shelhamer E., Darrell T., Fully convolutional networks for semantic segmentation, Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 3431-3440, (2015)
[2]  
Ronneberger O., Fischer P., Brox T., U-net: Convolutional networks for biomedical image segmentation, International Conference on Medical Image Computing and Computer-Assisted Intervention, pp. 234-241, (2015)
[3]  
Zhou Z., Siddiquee M.M.R., Tajbakhsh N., Liang J., Unet++: Redesigning skip connections to exploit multiscale features in image segmentation, IEEE Transactions on Medical Imaging, 39, 6, pp. 1856-1867, (2019)
[4]  
Jha D., Riegler M.A., Johansen D., Halvorsen P., Johansen H.D., Doubleu-net: A deep convolutional neural network for medical image segmentation, 2020 IEEE 33rd International symposium on computer-based medical systems (CBMS), pp. 558-564, (2020)
[5]  
Zhang Z., Liu Q., Wang Y., Road extraction by deep residual U-net, IEEE Geoscience and Remote Sensing Letters, 15, 5, pp. 749-753, (2018)
[6]  
He K., Zhang X., Ren S., Sun J., Deep residual learning for image recognition, Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, (2016)
[7]  
Zeng Z., Fan C., Xiao L., Qu X., Dea-Unet: a dense-edge-attention Unet architecture for medical image segmentation, Journal of Electronic Imaging, 31, 4, (2022)
[8]  
Li B., Wu F., Liu S., Tang J., Li G.H., Zhong M., Guan X., CA-Unet++: An improved structure for medical CT scanning based on the Unet++ architecture, International Journal of Intelligent Systems, 37, 11, pp. 8814-8832, (2022)
[9]  
Wu Z., Zhao L., Zhang H., Mr-Unet commodity semantic segmentation based on transfer learning, IEEE Access, 9, pp. 159447-159456, (2021)
[10]  
Qiao Z., Du C., Rad-Unet: a residual, attention-based, dense Unet for CT sparse reconstruction, Journal of Digital Imaging, pp. 1-11, (2022)