Explainable artificial intelligence in breast cancer detection and risk prediction: A systematic scoping review

Cited by: 8
Authors
Ghasemi, Amirehsan [1 ,2 ]
Hashtarkhani, Soheil [1 ]
Schwartz, David L. [3 ]
Shaban-Nejad, Arash [1 ,2 ]
Affiliations
[1] Univ Tennessee, Coll Med, Ctr Biomed Informat, Dept Pediat, Hlth Sci Ctr, Memphis, TN 38103 USA
[2] Univ Tennessee, Bredesen Ctr Interdisciplinary Res & Grad Educ, Knoxville, TN 38103 USA
[3] Univ Tennessee, Coll Med, Hlth Sci Ctr, Dept Radiat Oncol, Memphis, TN 38103 USA
Source
CANCER INNOVATION | 2024 / Vol. 3 / No. 5
Keywords
breast cancer; deep learning; explainable artificial intelligence; interpretable AI; machine learning; XAI; NEURAL-NETWORKS; AI; CLASSIFICATION; TRUST; EXPLANATIONS;
DOI
10.1002/cai2.136
Chinese Library Classification (CLC)
R73 [Oncology]
Discipline Classification Code
100214
Abstract
With advances in artificial intelligence (AI), data-driven algorithms are becoming increasingly popular in the medical domain. However, because many of these algorithms exhibit nonlinear and complex behavior, their decision-making is often not trusted by clinicians and is considered a black-box process. Hence, the scientific community has introduced explainable artificial intelligence (XAI) to remedy the problem. This systematic scoping review investigates the application of XAI in breast cancer detection and risk prediction. We conducted a comprehensive search of Scopus, IEEE Xplore, PubMed, and Google Scholar (first 50 citations) using a systematic search strategy. The search spanned January 2017 to July 2023 and focused on peer-reviewed studies implementing XAI methods on breast cancer datasets. Thirty studies met our inclusion criteria and were included in the analysis. The results revealed that SHapley Additive exPlanations (SHAP) is the most widely used model-agnostic XAI technique in breast cancer research, applied to explain model predictions, the diagnosis and classification of biomarkers, and prognosis and survival analysis. SHAP was used primarily to explain tree-based ensemble machine learning models. The most common reason is that SHAP is model agnostic, which makes it both popular and applicable to any model's predictions; it is also relatively easy to implement effectively and is well suited to high-performing models such as tree-based ensembles. Explainable AI improves the transparency, interpretability, fairness, and trustworthiness of AI-enabled health systems and medical devices and, ultimately, the quality of care and outcomes. This systematic review was carried out following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline in three steps: (1) identifying studies (193 studies were initially identified); (2) selecting studies (30 articles met the inclusion criteria for our comprehensive review); and (3) data extraction and summarization. For the third step, a data extraction form was developed in Google Sheets with eight variables: authors, year, aim of the study (objective), dataset(s), data type, important features, type of artificial intelligence (machine learning or deep learning), and the explained model. Two reviewers (Amirehsan Ghasemi and Soheil Hashtarkhani) extracted data from all included articles, and any disagreements were resolved by consensus.
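To make the reported pattern concrete, the following is a minimal, hypothetical Python sketch (not taken from any of the 30 included studies) of how SHAP is typically paired with a tree-based ensemble: a RandomForestClassifier trained on the Wisconsin breast cancer dataset bundled with scikit-learn and explained with shap.TreeExplainer. The dataset, model choice, and parameter values are illustrative assumptions only.

# Minimal SHAP-on-tree-ensemble sketch; dataset, model, and settings are illustrative assumptions.
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Wisconsin breast cancer dataset bundled with scikit-learn (30 tabular features).
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Tree-based ensemble: the model family most often explained with SHAP in the reviewed studies.
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# TreeExplainer computes SHAP values efficiently for tree ensembles;
# shap.Explainer falls back to model-agnostic estimators for other model types.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# For classifiers, SHAP may return one array per class (older versions)
# or a 3-D array (newer versions); keep the explanation for class index 1.
if isinstance(shap_values, list):
    shap_values = shap_values[1]
elif getattr(shap_values, "ndim", 2) == 3:
    shap_values = shap_values[:, :, 1]

# Global feature-importance summary (beeswarm plot) across the test set.
shap.summary_plot(shap_values, X_test)

The same explainer also supports local, per-prediction explanations (e.g., SHAP force or waterfall plots), complementing the global summary produced above.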
Pages: 22