Utilization of model-agnostic explainable artificial intelligence frameworks in oncology: a narrative review

Cited by: 33
Authors
Ladbury, Colton [1 ]
Zarinshenas, Reza [1 ]
Semwal, Hemal [2 ,3 ]
Tam, Andrew [1 ]
Vaidehi, Nagarajan [4 ]
Rodin, Andrei S. [4 ]
Liu, An [1 ]
Glaser, Scott [1 ]
Salgia, Ravi [5 ]
Amini, Arya [1 ,6 ]
Affiliations
[1] City Hope Natl Med Ctr, Dept Radiat Oncol, Duarte, CA USA
[2] Univ Calif Los Angeles, Dept Bioengn, Los Angeles, CA USA
[3] Univ Calif Los Angeles, Dept Integrated Biol & Physiol, Los Angeles, CA USA
[4] City Hope Natl Med Ctr, Dept Computat & Quantitat Med, Duarte, CA USA
[5] City Hope Natl Med Ctr, Dept Med Oncol, Duarte, CA USA
[6] City Hope Natl Med Ctr, Dept Radiat Oncol, 1500 Duarte Rd, Duarte, CA 91010 USA
Keywords
Explainable artificial intelligence (XAI); Local Interpretable Model-agnostic Explanations (LIME); machine learning (ML); SHapley Additive exPlanations (SHAP); MACHINE LEARNING-MODELS; OPEN-LABEL; CANCER; RISK; RADIOTHERAPY; RADIOMICS; RADIATION; DIAGNOSIS; SYSTEM;
DOI
10.21037/tcr-22-1626
Chinese Library Classification: R73 [Oncology]
Subject classification code: 100214
Abstract
Background and Objective: Machine learning (ML) models are increasingly being developed in oncology research for use in the clinic. However, while more complex models may offer greater predictive or prognostic power, a hurdle to their adoption is limited model interpretability: their inner workings can be perceived as a "black box". Explainable artificial intelligence (XAI) frameworks, including Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP), are novel, model-agnostic approaches that aim to provide insight into the inner workings of the "black box" by producing quantitative visualizations of how model predictions are calculated. In doing so, XAI can transform complicated ML models into easily understandable charts and interpretable sets of rules, giving providers an intuitive understanding of the knowledge generated and thus facilitating the deployment of such models in routine clinical workflows.

Methods: We performed a comprehensive, non-systematic review of the latest literature to define use cases of model-agnostic XAI frameworks in oncologic research. The examined database was PubMed/MEDLINE. The last search was run on May 1, 2022.

Key Content and Findings: In this review, we identified several fields in oncology research where ML models and XAI were utilized to improve interpretability, including prognostication, diagnosis, radiomics, pathology, treatment selection, radiation treatment workflows, and epidemiology. Within these fields, XAI facilitates determination of feature importance in the overall model, visualization of relationships and/or interactions, evaluation of how individual predictions are produced, feature selection, identification of prognostic and/or predictive thresholds, and overall confidence in the models, among other benefits. These examples provide a basis for future work to expand on, which can facilitate adoption in the clinic where the complexity of such modeling would otherwise be prohibitive.

Conclusions: Model-agnostic XAI frameworks offer an intuitive and effective means of describing oncology ML models, with applications including prognostication and determination of optimal treatment regimens. Using such frameworks presents an opportunity to improve understanding of ML models, which is a critical step toward their adoption in the clinic.
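The SHAP framework named in the abstract attributes a model's prediction to its input features using Shapley values from cooperative game theory: each feature's attribution is its average marginal contribution across all feature coalitions. As a minimal sketch of that underlying idea (a pure-Python illustration, not the `shap` library itself), the snippet below computes exact Shapley attributions for a hypothetical toy risk model; the model and the feature names (`age`, `stage`, `marker`) are invented for illustration only.

```python
from itertools import combinations
from math import factorial

def shapley_values(predict, baseline, instance):
    """Exact Shapley attribution for a small feature set.

    predict:  callable taking a dict {feature: value}
    baseline: reference values representing an 'absent' feature
    instance: observed values for the prediction being explained
    """
    features = list(instance)
    n = len(features)
    phi = {}
    for f in features:
        others = [g for g in features if g != f]
        total = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for subset in combinations(others, k):
                # Classic Shapley weight: |S|! (n-|S|-1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_f = {g: (instance[g] if g in subset or g == f else baseline[g])
                          for g in features}
                without_f = {g: (instance[g] if g in subset else baseline[g])
                             for g in features}
                total += weight * (predict(with_f) - predict(without_f))
        phi[f] = total
    return phi

# Toy linear "risk model" (hypothetical), so attributions are checkable by hand.
def risk(x):
    return 2.0 * x["age"] + 3.0 * x["stage"] + x["marker"]

baseline = {"age": 0.0, "stage": 0.0, "marker": 0.0}
patient = {"age": 1.0, "stage": 2.0, "marker": 3.0}

phi = shapley_values(risk, baseline, patient)
# For a linear model, each attribution reduces to weight * value,
# and the attributions sum to risk(patient) - risk(baseline).
```

Because this exact computation enumerates all 2^n coalitions, it is only feasible for a handful of features; the practical appeal of SHAP implementations is that they approximate or accelerate this calculation for real models.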
Pages: 3853-3868 (16 pages)
Related papers (50 total)
  • [21] Challenges and opportunities to integrate artificial intelligence in radiation oncology: a narrative review
    Jeong, Chiyoung
    Goh, Young Moon
    Kwak, Jungwon
    EWHA MEDICAL JOURNAL, 2024, 47 (04):
  • [22] Explainable artificial intelligence: an analytical review
    Angelov, Plamen P.
    Soares, Eduardo A.
    Jiang, Richard
    Arnold, Nicholas I.
    Atkinson, Peter M.
    WILEY INTERDISCIPLINARY REVIEWS-DATA MINING AND KNOWLEDGE DISCOVERY, 2021, 11 (05)
  • [24] Explainable artificial intelligence: a comprehensive review
    Minh, Dang
    Wang, H. Xiang
    Li, Y. Fen
    Nguyen, Tan N.
    ARTIFICIAL INTELLIGENCE REVIEW, 2022, 55 (05) : 3503 - 3568
  • [25] Audio Explainable Artificial Intelligence: A Review
    Akman, Alican
    Schuller, Bjorn W.
    INTELLIGENT COMPUTING, 2024, 2
  • [26] A review of Explainable Artificial Intelligence in healthcare
    Sadeghi, Zahra
    Alizadehsani, Roohallah
    Cifci, Mehmet Akif
    Kausar, Samina
    Rehman, Rizwan
    Mahanta, Priyakshi
    Bora, Pranjal Kumar
    Almasri, Ammar
    Alkhawaldeh, Rami S.
    Hussain, Sadiq
    Alatas, Bilal
    Shoeibi, Afshin
    Moosaei, Hossein
    Hladik, Milan
    Nahavandi, Saeid
    Pardalos, Panos M.
    COMPUTERS & ELECTRICAL ENGINEERING, 2024, 118
  • [27] Computational Evaluation of Model-Agnostic Explainable AI Using Local Feature Importance in Healthcare
    Erdeniz, Seda Polat
    Schrempf, Michael
    Kramer, Diether
    Rainer, Peter P.
    Felfernig, Alexander
    Tran, Trang
    Burgstaller, Tamim
    Lubos, Sebastian
    ARTIFICIAL INTELLIGENCE IN MEDICINE, AIME 2023, 2023, 13897 : 114 - 119
  • [28] Unveiling the Power of Model-Agnostic Multiscale Analysis for Enhancing Artificial Intelligence Models in Breast Cancer Histopathology Images
    Tsiknakis, Nikos
    Manikis, Georgios
    Tzoras, Evangelos
    Salgkamis, Dimitrios
    Vidal, Joan Martinez
    Wang, Kang
    Zaridis, Dimitris
    Sifakis, Emmanouil
    Zerdes, Ioannis
    Bergh, Jonas
    Hartman, Johan
    Acs, Balazs
    Marias, Kostas
    Foukakis, Theodoros
    IEEE JOURNAL OF BIOMEDICAL AND HEALTH INFORMATICS, 2024, 28 (09) : 5312 - 5322
  • [29] Explainable artificial intelligence for spectroscopy data: a review
    Contreras, Jhonatan
    Bocklitz, Thomas
    PFLUGERS ARCHIV-EUROPEAN JOURNAL OF PHYSIOLOGY, 2024, : 603 - 615
  • [30] Explainable Artificial Intelligence in Education: A Comprehensive Review
    Chaushi, Blerta Abazi
    Selimi, Besnik
    Chaushi, Agron
    Apostolova, Marika
    EXPLAINABLE ARTIFICIAL INTELLIGENCE, XAI 2023, PT II, 2023, 1902 : 48 - 71