Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review

Cited: 33
Authors
Ladbury, Colton [1 ]
Zarinshenas, Reza [1 ]
Semwal, Hemal [2 ,3 ]
Tam, Andrew [1 ]
Vaidehi, Nagarajan [4 ]
Rodin, Andrei S. [4 ]
Liu, An [1 ]
Glaser, Scott [1 ]
Salgia, Ravi [5 ]
Amini, Arya [1 ,6 ]
Affiliations
[1] City Hope Natl Med Ctr, Dept Radiat Oncol, Duarte, CA USA
[2] Univ Calif Los Angeles, Dept Bioengn, Los Angeles, CA USA
[3] Univ Calif Los Angeles, Dept Integrated Biol & Physiol, Los Angeles, CA USA
[4] City Hope Natl Med Ctr, Dept Computat & Quantitat Med, Duarte, CA USA
[5] City Hope Natl Med Ctr, Dept Med Oncol, Duarte, CA USA
[6] City Hope Natl Med Ctr, Dept Radiat Oncol, 1500 Duarte Rd, Duarte, CA 91010 USA
Keywords
Explainable artificial intelligence (XAI); Local Interpretable Model-agnostic Explanations (LIME); machine learning (ML); SHapley Additive exPlanations (SHAP); MACHINE LEARNING-MODELS; OPEN-LABEL; CANCER; RISK; RADIOTHERAPY; RADIOMICS; RADIATION; DIAGNOSIS; SYSTEM;
DOI
10.21037/tcr-22-1626
CLC Number
R73 [Oncology]
Discipline Code
100214
Abstract
Background and Objective: Machine learning (ML) models are increasingly being utilized in oncology research with the goal of clinical deployment. However, while more complex models may improve predictive or prognostic power, a hurdle to their adoption is limited model interpretability, wherein the inner workings can be perceived as a "black box". Explainable artificial intelligence (XAI) frameworks, including Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are novel, model-agnostic approaches that aim to provide insight into this "black box" by producing quantitative visualizations of how model predictions are calculated. In doing so, XAI can transform complicated ML models into easily understandable charts and interpretable sets of rules, giving providers an intuitive understanding of the knowledge generated and thus facilitating the deployment of such models in routine clinical workflows.

Methods: We performed a comprehensive, non-systematic review of the latest literature to define use cases of model-agnostic XAI frameworks in oncologic research. The examined database was PubMed/MEDLINE. The last search was run on May 1, 2022.

Key Content and Findings: We identified several fields of oncology research in which ML models and XAI were utilized to improve interpretability, including prognostication, diagnosis, radiomics, pathology, treatment selection, radiation treatment workflows, and epidemiology. Within these fields, XAI facilitates determination of feature importance in the overall model, visualization of relationships and/or interactions, evaluation of how individual predictions are produced, feature selection, identification of prognostic and/or predictive thresholds, and overall confidence in the models, among other benefits. These examples provide a basis for future work to expand on, which can facilitate adoption in the clinic when the complexity of such modeling would otherwise be prohibitive.

Conclusions: Model-agnostic XAI frameworks offer an intuitive and effective means of describing oncology ML models, with applications including prognostication and determination of optimal treatment regimens. Using such frameworks presents an opportunity to improve understanding of ML models, which is a critical step to their adoption in the clinic.
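The SHAP and LIME workflows the abstract describes follow a common pattern: fit a model, then query an explainer for global feature importance across the cohort and for local, per-prediction attributions. Below is a minimal Python sketch of that pattern; the synthetic data, feature names, and random-forest model are illustrative assumptions, not drawn from the reviewed studies.

import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestRegressor

# Hypothetical synthetic cohort standing in for an oncology dataset;
# the feature names are illustrative assumptions only.
rng = np.random.default_rng(0)
feature_names = ["age", "tumor_size", "dose_gy", "biomarker"]
X = rng.normal(size=(300, 4))
y = X[:, 0] + 0.5 * X[:, 1] * X[:, 2] + rng.normal(scale=0.1, size=300)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# SHAP: the generic Explainer wraps any predict function (model-agnostic).
shap_explainer = shap.Explainer(model.predict, X)
shap_values = shap_explainer(X[:100])
shap.plots.beeswarm(shap_values)      # global feature importance across the cohort
shap.plots.waterfall(shap_values[0])  # how a single prediction is composed

# LIME: fits a local surrogate model around one prediction.
lime_explainer = LimeTabularExplainer(X, feature_names=feature_names, mode="regression")
lime_exp = lime_explainer.explain_instance(X[0], model.predict, num_features=4)
print(lime_exp.as_list())             # (feature condition, local weight) pairs

The beeswarm plot corresponds to the cohort-level feature importance the review discusses, while the waterfall and LIME outputs illustrate how an individual patient's prediction is decomposed into feature contributions.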
Pages: 3853-3868 (16 pages)