Attention-based Fusion Network for Breast Cancer Segmentation and Classification Using Multi-modal Ultrasound Images

Cited by: 1
Authors
Cho, Yoonjae [1 ,2 ,3 ]
Misra, Sampa [1 ,2 ,3 ]
Managuli, Ravi [4 ]
Barr, Richard G. [5 ]
Lee, Jeongmin [6 ,7 ]
Kim, Chulhong [1 ,2 ,3 ,8 ]
Affiliations
[1] Pohang Univ Sci & Technol, Med Device Innovat Ctr, Mech Engn, Convergence IT Engn, Dept Elect Engn, Pohang 37673, South Korea
[2] Pohang Univ Sci & Technol, Grad Sch Artificial Intelligence, Pohang 37673, South Korea
[3] Pohang Univ Sci & Technol, Med Device Innovat Ctr, Pohang, South Korea
[4] Univ Washington, Dept Bioengn, Seattle, WA USA
[5] Southwoods Imaging, Youngstown, OH USA
[6] Sungkyunkwan Univ, Sch Med, Dept Radiol, Seoul, South Korea
[7] Sungkyunkwan Univ, Ctr Imaging Sci, Samsung Med Ctr, Sch Med, Seoul, South Korea
[8] Opticho Inc, Pohang, South Korea
Funding
National Research Foundation of Singapore;
Keywords
Breast cancer; Breast ultrasound images; Multi-modality; Classification; Segmentation; Transfer learning; BENIGN;
DOI
10.1016/j.ultrasmedbio.2024.11.020
Chinese Library Classification (CLC)
O42 [Acoustics];
Discipline classification codes
070206; 082403;
Abstract
Objective: Breast cancer is one of the most common cancers in women, and early detection and treatment lead to better patient outcomes. Ultrasound (US) imaging plays a crucial role in the early detection of breast cancer, providing a cost-effective, convenient, and safe diagnostic approach. To date, much research has been conducted to enable reliable and effective early diagnosis of breast cancer through US image analysis. Recently, with the introduction of machine learning technologies such as deep learning (DL), automated lesion segmentation and classification and the identification of malignant masses in breast US images have progressed, and computer-aided diagnosis (CAD) technology is being applied effectively in clinics. Herein, we propose a novel deep learning-based "segmentation + classification" model that uses B- and SE-mode images.
Methods: For the segmentation task, we propose a Multi-Modal Fusion U-Net (MMF-U-Net), which segments lesions by mixing B- and SE-mode information through fusion blocks. After segmentation, the lesion area is cropped from the B- and SE-mode images using the predicted segmentation mask. The encoder of the pre-trained MMF-U-Net is then applied to the cropped B- and SE-mode breast US images to classify lesions as benign or malignant.
Results: The proposed method achieved good segmentation and classification scores. Using the proposed MMF-U-Net on real-world clinical data, the Dice score, intersection over union (IoU), precision, and recall are 78.23%, 68.60%, 82.21%, and 80.58%, respectively, and the classification accuracy is 98.46%.
Conclusion: Our results show that the proposed method effectively segments the breast lesion area and reliably distinguishes benign from malignant lesions.
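The two-stage pipeline described in the abstract can be illustrated with a minimal PyTorch sketch. Everything below, including the module names (FusionBlock, MMFUNetEncoder, LesionClassifier), the layer widths, and the mask-based cropping helper, is an illustrative assumption about one way to realize a "segment, crop, then classify with the shared encoder" design; it is not the authors' published implementation.

```python
# Hypothetical sketch of the "segmentation + classification" pipeline on
# B-mode and SE-mode ultrasound images. Architecture details are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionBlock(nn.Module):
    """Mixes B-mode and SE-mode feature maps at one encoder level (assumed design)."""

    def __init__(self, channels):
        super().__init__()
        self.mix = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, feat_b, feat_se):
        # Concatenate the two modality features and mix them with a conv block.
        return self.mix(torch.cat([feat_b, feat_se], dim=1))


class MMFUNetEncoder(nn.Module):
    """Two parallel convolutional encoders (B-mode, SE-mode) joined by fusion blocks.
    The fused multi-scale features would feed a U-Net-style decoder for segmentation."""

    def __init__(self, in_ch=1, widths=(32, 64, 128)):
        super().__init__()

        def stage(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )

        self.b_stages = nn.ModuleList()
        self.se_stages = nn.ModuleList()
        self.fusions = nn.ModuleList()
        cin = in_ch
        for w in widths:
            self.b_stages.append(stage(cin, w))
            self.se_stages.append(stage(cin, w))
            self.fusions.append(FusionBlock(w))
            cin = w

    def forward(self, b_img, se_img):
        fused, fb, fs = [], b_img, se_img
        for b_stage, se_stage, fuse in zip(self.b_stages, self.se_stages, self.fusions):
            fb, fs = b_stage(fb), se_stage(fs)
            fused.append(fuse(fb, fs))
        return fused  # one fused feature map per scale


def crop_to_mask(image, mask):
    """Crop an image to the bounding box of a predicted 2-D (H, W) lesion mask."""
    ys, xs = torch.where(mask > 0.5)
    return image[..., ys.min():ys.max() + 1, xs.min():xs.max() + 1]


class LesionClassifier(nn.Module):
    """Reuses the pre-trained segmentation encoder on mask-cropped B-/SE-mode
    patches to classify lesions as benign or malignant."""

    def __init__(self, encoder, feat_ch=128, num_classes=2):
        super().__init__()
        self.encoder = encoder  # weights taken from the trained segmentation encoder
        self.head = nn.Linear(feat_ch, num_classes)

    def forward(self, b_crop, se_crop):
        deepest = self.encoder(b_crop, se_crop)[-1]             # deepest fused features
        pooled = F.adaptive_avg_pool2d(deepest, 1).flatten(1)   # global average pooling
        return self.head(pooled)                                # benign vs. malignant logits
```

In the actual model, the fused features would drive a U-Net decoder trained with a segmentation loss (e.g., Dice), and the classifier head would be fine-tuned on the cropped patches; those training details are omitted from this sketch.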
Pages: 568-577
Number of pages: 10