Explainable breast cancer molecular expression prediction using multi-task deep-learning based on 3D whole breast ultrasound

Cited by: 0
Authors:
Huang, Zengan [1 ]
Zhang, Xin [1 ]
Ju, Yan [2 ]
Zhang, Ge [2 ]
Chang, Wanying [2 ]
Song, Hongping [2 ]
Gao, Yi [1 ]
Affiliations:
[1] Shenzhen Univ, Med Sch, Sch Biomed Engn, Shenzhen 518055, Guangdong, Peoples R China
[2] Fourth Mil Med Univ, Xijing Hosp, Dept Ultrasound, 127 Changle West Rd, Xian 710032, Peoples R China
Source:
INSIGHTS INTO IMAGING | 2024, Vol. 15, Issue 1
Funding:
National Natural Science Foundation of China
Keywords:
Breast cancer; Deep learning; Ultrasound imaging; MAMMOGRAPHY; NETWORK;
DOI
10.1186/s13244-024-01810-9
Chinese Library Classification:
R8 [Special Medicine]; R445 [Diagnostic Imaging]
Discipline Classification Codes:
1002; 100207; 1009
Abstract
Objectives: To noninvasively estimate the expression of three breast cancer biomarkers, estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2), and to enhance prediction performance and interpretability via multi-task deep learning.

Methods: The study included 388 breast cancer patients who underwent 3D whole breast ultrasound system (3DWBUS) examinations at Xijing Hospital between October 2020 and September 2021. Two predictive models were developed: a single-task model that predicts biomarker expression, and a multi-task model that combines tumor segmentation with biomarker prediction to enhance interpretability. Performance evaluation included individual and overall prediction metrics, and DeLong's test was used for performance comparison. The models' attention regions were visualized using Grad-CAM++.

Results: All patients were randomly split into a training set (n = 240, 62%), a validation set (n = 60, 15%), and a test set (n = 88, 23%). In the individual evaluation of ER, PR, and HER2 expression prediction on the test set, the single-task and multi-task models achieved AUCs of 0.809 and 0.735 for ER, 0.688 and 0.767 for PR, and 0.626 and 0.697 for HER2, respectively. In the overall evaluation, the multi-task model performed better on the test set, achieving a higher macro AUC of 0.733 versus 0.708 for the single-task model. Grad-CAM++ visualization revealed that the multi-task model focused more strongly on diseased tissue areas, improving the interpretability of how the model works.

Conclusion: Both models demonstrated strong performance, with the multi-task model achieving higher overall accuracy and offering improved interpretability on noninvasive 3DWBUS images using Grad-CAM++.

Critical relevance statement: The multi-task deep learning model effectively predicts breast cancer biomarkers, offering direct biomarker identification and improved clinical interpretability, potentially boosting the efficiency of targeted drug screening.

Key Points:
  • Tumoral biomarkers are paramount for determining breast cancer treatment.
  • The multi-task model can improve prediction performance and interpretability in clinical practice.
  • 3D whole breast ultrasound system-based deep learning models excelled in predicting breast cancer biomarkers.
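The overall comparison in the abstract rests on a macro AUC, i.e., the unweighted mean of the per-biomarker (ER, PR, HER2) AUCs. A minimal pure-Python sketch of that aggregation (the function names and toy scores below are illustrative, not from the paper):

```python
def auc(scores, labels):
    """Pairwise (Mann-Whitney) AUC: fraction of positive/negative
    score pairs that are ranked correctly, ties counted as 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def macro_auc(per_task_scores, per_task_labels):
    """Unweighted mean of per-task AUCs (e.g., ER, PR, HER2)."""
    aucs = [auc(s, y) for s, y in zip(per_task_scores, per_task_labels)]
    return sum(aucs) / len(aucs)
```

Because the macro average weights each biomarker equally, a model can win overall (as the multi-task model does here, 0.733 vs. 0.708) while losing on an individual biomarker such as ER.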
Pages: 13
Related Papers
50 records in total
  • [11] 3D multi-view tumor detection in automated whole breast ultrasound using deep convolutional neural network
    Zhou, Yue
    Chen, Houjin
    Li, Yanfeng
    Wang, Shu
    Cheng, Lin
    Li, Jupeng
    EXPERT SYSTEMS WITH APPLICATIONS, 2021, 168
  • [12] Multi-center study: ultrasound-based deep learning features for predicting Ki-67 expression in breast cancer
    Cen, Qishan
    Wang, Man
    Zhou, Siying
    Yang, Hong
    Wang, Ye
    SCIENTIFIC REPORTS, 2025, 15 (01):
  • [13] Multi-task deep learning for fine-grained classification and grading in breast cancer histopathological images
    Li, Lingqiao
    Pan, Xipeng
    Yang, Huihua
    Liu, Zhenbing
    He, Yubei
    Li, Zhongming
    Fan, Yongxian
    Cao, Zhiwei
    Zhang, Longhao
    MULTIMEDIA TOOLS AND APPLICATIONS, 2020, 79 (21-22) : 14509 - 14528
  • [14] Deep learning based tumor detection and segmentation for automated 3D breast ultrasound imaging
    Barkhof, Francien
    Abbring, Silvia
    Pardasani, Rohit
    Awasthi, Navchetan
    PROCEEDINGS OF THE 2024 IEEE SOUTH ASIAN ULTRASONICS SYMPOSIUM, SAUS 2024, 2024,
  • [15] 3D Breast Cancer Segmentation in DCE-MRI Using Deep Learning With Weak Annotation
    Park, Ga Eun
    Kim, Sung Hun
    Nam, Yoonho
    Kang, Junghwa
    Park, Minjeong
    Kang, Bong Joo
    JOURNAL OF MAGNETIC RESONANCE IMAGING, 2024, 59 (06) : 2252 - 2262
  • [16] Multi-task deep learning for fine-grained classification and grading in breast cancer histopathological images
    Lingqiao Li
    Xipeng Pan
    Huihua Yang
    Zhenbing Liu
    Yubei He
    Zhongming Li
    Yongxian Fan
    Zhiwei Cao
    Longhao Zhang
    Multimedia Tools and Applications, 2020, 79 : 14509 - 14528
  • [17] Risk Stratification of Lung Nodules Using 3D CNN-Based Multi-task Learning
    Hussein, Sarfaraz
    Cao, Kunlin
    Song, Qi
    Bagci, Ulas
    INFORMATION PROCESSING IN MEDICAL IMAGING (IPMI 2017), 2017, 10265 : 249 - 260
  • [18] Prediction of gene expression-based breast cancer proliferation scores from histopathology whole slide images using deep learning
    Ekholm, Andreas
    Wang, Yinxi
    Vallon-Christersson, Johan
    Boissin, Constance
    Rantalainen, Mattias
    BMC CANCER, 2024, 24 (01)
  • [19] Identification of Luminal A breast cancer by using deep learning analysis based on multi-modal images
    Liu, Menghan
    Zhang, Shuai
    Du, Yanan
    Zhang, Xiaodong
    Wang, Dawei
    Ren, Wanqing
    Sun, Jingxiang
    Yang, Shiwei
    Zhang, Guang
    FRONTIERS IN ONCOLOGY, 2023, 13
  • [20] A nomogram based on radiomics signature and deep-learning signature for preoperative prediction of axillary lymph node metastasis in breast cancer
    Wang, Dawei
    Hu, Yiqi
    Zhan, Chenao
    Zhang, Qi
    Wu, Yiping
    Ai, Tao
    FRONTIERS IN ONCOLOGY, 2022, 12