Multi-task approach based on combined CNN-transformer for efficient segmentation and classification of breast tumors in ultrasound images

Cited by: 8
Authors
Tagnamas, Jaouad [1 ]
Ramadan, Hiba [1 ]
Yahyaouy, Ali [1 ]
Tairi, Hamid [1 ]
Affiliations
[1] Univ Sidi Mohamed Ben Abdellah, Fac Sci Dhar El Mahraz, Dept Informat, Fes 30000, Morocco
Keywords
Breast ultrasound segmentation; Convolutional neural networks; Swin Transformer; ConvNeXt; Efficient channel attention; Coordinate attention module; Lesions
DOI
10.1186/s42492-024-00155-w
Chinese Library Classification
TP39 [Computer applications]
Subject classification codes
081203; 0835
Abstract
Accurate segmentation of breast ultrasound (BUS) images is crucial for the early diagnosis and treatment of breast cancer. However, segmenting lesions in BUS images remains challenging because convolutional neural networks (CNNs) are limited in capturing long-range dependencies and global context information, and existing methods that rely solely on CNNs have struggled to address these issues. Recently, ConvNeXt has emerged as a promising CNN architecture, while transformers have demonstrated outstanding performance in diverse computer vision tasks, including medical image analysis. In this paper, we propose CS-Net, a novel breast lesion segmentation network that combines the strengths of ConvNeXt and Swin Transformer models to enhance the U-Net architecture. The network operates on BUS images and performs segmentation end to end. First, to address the limitations of CNNs, we design a hybrid encoder that incorporates modified ConvNeXt convolutions and Swin Transformer blocks, and we add a Coordinate Attention Module to better capture spatial and channel attention in the feature maps. Second, we design an Encoder-Decoder Features Fusion Module that fuses low-level encoder features with high-level semantic decoder features during image reconstruction. Experimental results demonstrate the superiority of our network over state-of-the-art image segmentation methods for BUS lesion segmentation.
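The abstract describes the architecture only at a high level. As an illustration, below is a minimal PyTorch sketch of how a hybrid encoder stage, a coordinate-attention module, and an encoder-decoder feature-fusion block could be wired together. It is not the authors' implementation: the Swin Transformer branch is replaced by plain multi-head self-attention, the coordinate attention is simplified, and all module names, channel sizes, and layer choices are assumptions made for this sketch.

# Hypothetical sketch (not the authors' code): one hybrid encoder stage combining a
# ConvNeXt-style block with a lightweight attention branch standing in for the Swin
# Transformer path, plus a simplified coordinate-attention module and an
# encoder-decoder feature-fusion block. Module names and sizes are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConvNeXtBlock(nn.Module):
    """ConvNeXt-style block: 7x7 depthwise conv, LayerNorm, pointwise MLP, residual."""
    def __init__(self, dim):
        super().__init__()
        self.dwconv = nn.Conv2d(dim, dim, kernel_size=7, padding=3, groups=dim)
        self.norm = nn.LayerNorm(dim)
        self.pwconv1 = nn.Linear(dim, 4 * dim)
        self.pwconv2 = nn.Linear(4 * dim, dim)

    def forward(self, x):                        # x: (B, C, H, W)
        shortcut = x
        x = self.dwconv(x).permute(0, 2, 3, 1)   # -> (B, H, W, C) for LayerNorm/Linear
        x = self.pwconv2(F.gelu(self.pwconv1(self.norm(x))))
        return shortcut + x.permute(0, 3, 1, 2)  # back to (B, C, H, W)


class CoordinateAttention(nn.Module):
    """Simplified coordinate attention: pool along H and W, then re-weight the map."""
    def __init__(self, dim, reduction=8):
        super().__init__()
        hidden = max(dim // reduction, 8)
        self.conv1 = nn.Conv2d(dim, hidden, kernel_size=1)
        self.conv_h = nn.Conv2d(hidden, dim, kernel_size=1)
        self.conv_w = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x):                         # x: (B, C, H, W)
        b, c, h, w = x.shape
        pool_h = x.mean(dim=3, keepdim=True)                        # (B, C, H, 1)
        pool_w = x.mean(dim=2, keepdim=True).permute(0, 1, 3, 2)    # (B, C, W, 1)
        y = F.relu(self.conv1(torch.cat([pool_h, pool_w], dim=2)))
        y_h, y_w = torch.split(y, [h, w], dim=2)
        a_h = torch.sigmoid(self.conv_h(y_h))                       # (B, C, H, 1)
        a_w = torch.sigmoid(self.conv_w(y_w.permute(0, 1, 3, 2)))   # (B, C, 1, W)
        return x * a_h * a_w


class HybridEncoderStage(nn.Module):
    """Convolutional branch + attention branch, fused and refined by coordinate attention."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.conv_branch = ConvNeXtBlock(dim)
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)  # stand-in for Swin
        self.fuse = nn.Conv2d(2 * dim, dim, kernel_size=1)
        self.ca = CoordinateAttention(dim)

    def forward(self, x):                          # x: (B, C, H, W)
        b, c, h, w = x.shape
        conv_feat = self.conv_branch(x)
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C)
        attn_feat, _ = self.attn(tokens, tokens, tokens)
        attn_feat = attn_feat.transpose(1, 2).reshape(b, c, h, w)
        return self.ca(self.fuse(torch.cat([conv_feat, attn_feat], dim=1)))


class EncoderDecoderFusion(nn.Module):
    """Fuses low-level encoder skip features with upsampled high-level decoder features."""
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Conv2d(2 * dim, dim, kernel_size=3, padding=1)

    def forward(self, skip, decoder_feat):
        decoder_feat = F.interpolate(decoder_feat, size=skip.shape[-2:],
                                     mode="bilinear", align_corners=False)
        return F.relu(self.proj(torch.cat([skip, decoder_feat], dim=1)))


if __name__ == "__main__":
    stage = HybridEncoderStage(dim=64)
    x = torch.randn(1, 64, 32, 32)
    print(stage(x).shape)                          # torch.Size([1, 64, 32, 32])

In CS-Net itself the attention path uses windowed Swin Transformer blocks and the fusion presumably runs at every decoder scale; this sketch only shows the shape-level wiring under the stated assumptions.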
Pages: 15