A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound

Cited by: 57
Authors
Gomez-Flores, Wilfrido [1 ]
de Albuquerque Pereira, Wagner Coelho [2 ]
Affiliations
[1] Inst Politecn Nacl, Unidad Tamaulipas, Ctr Invest & Estudios Avanzados, Ciudad Victoria, Tamaulipas, Mexico
[2] Univ Fed Rio de Janeiro, Programa Engn Biomed COPPE, Rio De Janeiro, Brazil
Keywords
Breast ultrasound; Breast tumors; Convolutional neural networks; Semantic segmentation; Transfer learning; COMPUTER-AIDED DIAGNOSIS; CLASSIFICATION; LESIONS; AGE;
DOI
10.1016/j.compbiomed.2020.104036
Chinese Library Classification
Q [Biological Sciences];
Subject classification codes
07; 0710; 09;
Abstract
The automatic segmentation of breast tumors in ultrasound (BUS) has recently been addressed using convolutional neural networks (CNN). These CNN-based approaches generally modify a previously proposed CNN architecture or design a new architecture using CNN ensembles. Although these methods have reported satisfactory results, the trained CNN architectures are often unavailable for reproducibility purposes. Moreover, these methods commonly learn from small BUS datasets with particular properties, which limits generalization to new cases. This paper evaluates four public CNN-based semantic segmentation models developed by the computer vision community: (1) the Fully Convolutional Network (FCN) with the AlexNet network, (2) the U-Net network, (3) SegNet using the VGG16 and VGG19 networks, and (4) DeepLabV3+ using the ResNet18, ResNet50, MobileNet-V2, and Xception networks. By transfer learning, these CNNs are fine-tuned to segment BUS images into normal and tumoral pixels. The goal is to select a potential CNN-based segmentation model to be further used in computer-aided diagnosis (CAD) systems. The main significance of this study is the comparison of eight well-established CNN architectures using a more extensive BUS dataset than those used by approaches currently found in the literature. More than 3000 BUS images acquired from seven US machine models are used for training and validation. The F1-score (F1s) and the Intersection over Union (IoU) quantify the segmentation performance. The segmentation models based on SegNet and DeepLabV3+ obtain the best results, with F1s > 0.90 and IoU > 0.81. For U-Net, the segmentation performance is F1s = 0.89 and IoU = 0.80, whereas FCN-AlexNet attains the lowest results, with F1s = 0.84 and IoU = 0.73. In particular, ResNet18 obtains F1s = 0.905 and IoU = 0.827 and requires the least training time among the SegNet and DeepLabV3+ networks. Hence, ResNet18 is a potential candidate for implementing fully automated end-to-end CAD systems. The CNN models generated in this study are available to researchers at https://github.com/wgomezf/CNN-BUS-segment, to enable fair comparison with other CNN-based segmentation approaches for BUS images.
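For context, the transfer-learning step described in the abstract can be sketched in a few lines of PyTorch. The snippet below is illustrative only: it uses torchvision's DeepLabV3/ResNet-50 model as a stand-in (torchvision does not ship the exact DeepLabV3+/ResNet18 combination evaluated in the paper; the authors' trained models are the ones at the GitHub link above), and the data and training step are placeholders.

```python
# Hypothetical sketch: fine-tune a pre-trained semantic segmentation model
# for binary (normal vs. tumoral) BUS segmentation via transfer learning.
# torchvision's DeepLabV3/ResNet-50 stands in for the paper's DeepLabV3+/ResNet18.
import torch
import torch.nn as nn
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 2  # background (normal tissue) and tumor

# Load pre-trained weights, then replace the final classification layers
# so the network predicts two classes instead of the original 21.
model = deeplabv3_resnet50(weights="DEFAULT")
model.classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)
model.aux_classifier[4] = nn.Conv2d(256, NUM_CLASSES, kernel_size=1)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(images, masks):
    """One fine-tuning step.

    images: float tensor (N, 3, H, W); grayscale BUS frames replicated to 3 channels.
    masks:  long tensor  (N, H, W) with values in {0, 1}.
    """
    model.train()
    optimizer.zero_grad()
    logits = model(images)["out"]   # (N, 2, H, W)
    loss = criterion(logits, masks)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Smoke test with random tensors standing in for a real BUS data loader.
    x = torch.rand(2, 3, 256, 256)
    y = torch.randint(0, NUM_CLASSES, (2, 256, 256))
    print("loss:", train_step(x, y))
```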
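Likewise, the two evaluation metrics quoted in the abstract, the F1-score (equivalent to the Dice coefficient for binary masks) and the Intersection over Union, follow directly from the pixel-wise confusion-matrix counts: F1s = 2TP / (2TP + FP + FN) and IoU = TP / (TP + FP + FN). The helper below is a minimal NumPy sketch with synthetic masks, not the authors' evaluation code.

```python
# Hypothetical helper: F1-score (Dice) and IoU for one binary segmentation mask.
import numpy as np

def f1_and_iou(pred, target):
    """pred, target: boolean or 0/1 arrays of the same shape (tumor = 1)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    tp = np.logical_and(pred, target).sum()
    fp = np.logical_and(pred, ~target).sum()
    fn = np.logical_and(~pred, target).sum()
    f1 = 2 * tp / (2 * tp + fp + fn) if (2 * tp + fp + fn) else 1.0
    iou = tp / (tp + fp + fn) if (tp + fp + fn) else 1.0
    return f1, iou

# Example: a prediction that partially overlaps a square "tumor" region.
gt = np.zeros((128, 128), dtype=bool); gt[40:90, 40:90] = True
pr = np.zeros((128, 128), dtype=bool); pr[45:90, 40:95] = True
print(f1_and_iou(pr, gt))  # prints (F1, IoU) for this synthetic pair
```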
Pages: 10