Towards a safe and efficient clinical implementation of machine learning in radiation oncology by exploring model interpretability, explainability and data-model dependency

Cited by: 34
Authors
Barragán-Montero, Ana [1]
Bibal, Adrien [2]
Dastarac, Margerie Huet [1]
Draguet, Camille [1,3]
Valdes, Gilmer [4]
Nguyen, Dan [5]
Willems, Siri [6]
Vandewinckele, Liesbeth [3]
Holmström, Mats [7]
Löfman, Fredrik [7]
Souris, Kevin [1]
Sterpin, Edmond [1,3]
Lee, John A. [1]
Affiliations
[1] UCLouvain, Inst Rech Expt & Clin IREC, Radiat & Oncol MIRO Lab, Mol Imaging, Louvain, Belgium
[2] UCLouvain, PReCISE, Fac Comp Sci, UNamur & CENTAL, NaDI Inst, Louvain, Belgium
[3] Katholieke Univ Leuven, Dept Oncol, Lab Expt Radiotherapy, Leuven, Belgium
[4] Univ Calif San Francisco, Dept Radiat Oncol, Dept Epidemiol & Biostat, San Francisco, CA USA
[5] UT Southwestern Med Ctr, Dept Radiat Oncol, Med Artificial Intelligence & Automat MAIA Lab, Dallas, TX USA
[6] UZ Leuven, KU Leuven Belgium MIRC, ESAT PSI, Leuven, Belgium
[7] RaySearch Labs AB, Uppsala, Sweden
Funding
US National Institutes of Health;
Keywords
machine learning; interpretability and explainability; uncertainty quantification; clinical implementation; radiation oncology; MEDICAL IMAGE SEGMENTATION; DEEP NEURAL-NETWORKS; CELL LUNG-CANCER; ARTIFICIAL-INTELLIGENCE; DOSE PREDICTION; NCIC CTG; UNCERTAINTY QUANTIFICATION; INTEROBSERVER VARIABILITY; TREATMENT RESPONSE; DOMAIN-KNOWLEDGE;
DOI
10.1088/1361-6560/ac678a
Chinese Library Classification (CLC)
R318 [Biomedical Engineering];
Discipline Classification Code
0831;
Abstract
The interest in machine learning (ML) has grown tremendously in recent years, partly due to the performance leap brought by new deep learning techniques, convolutional neural networks for images, increased computational power, and the wider availability of large datasets. Most fields of medicine follow this popular trend and, notably, radiation oncology is among those at the forefront, with an already long tradition of digital imaging and fully computerized workflows. ML models are driven by data and, in contrast with many statistical or physical models, they can be very large and complex, with countless generic parameters. This inevitably raises two issues, namely, the tight dependence between the models and the datasets that feed them, and the interpretability of the models, which decreases as their complexity grows. Any problems in the data used to train a model will later be reflected in its performance. This, together with the low interpretability of ML models, makes their implementation into the clinical workflow particularly difficult. Building tools for risk assessment and quality assurance of ML models must therefore address two main points: interpretability and data-model dependency. After a joint introduction to both radiation oncology and ML, this paper reviews the main risks and current solutions when applying the latter to workflows of the former. Risks associated with data and models, as well as their interaction, are detailed. Next, the core concepts of interpretability, explainability, and data-model dependency are formally defined and illustrated with examples. Afterwards, a broad discussion goes through key applications of ML in radiation oncology workflows, as well as vendors' perspectives on the clinical implementation of ML.
Pages: 46
Related References
377 references in total
  • [1] A review of uncertainty quantification in deep learning: Techniques, applications and challenges
    Abdar, Moloud
    Pourpanah, Farhad
    Hussain, Sadiq
    Rezazadegan, Dana
    Liu, Li
    Ghavamzadeh, Mohammad
    Fieguth, Paul
    Cao, Xiaochun
    Khosravi, Abbas
    Acharya, U. Rajendra
    Makarenkov, Vladimir
    Nahavandi, Saeid
    [J]. INFORMATION FUSION, 2021, 76 : 243 - 297
  • [2] Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
    Adadi, Amina
    Berrada, Mohammed
    [J]. IEEE ACCESS, 2018, 6 : 52138 - 52160
  • [3] From Handcrafted to Deep-Learning-Based Cancer Radiomics: Challenges and Opportunities
    Afshar, Parnian
    Mohammadi, Arash
    Plataniotis, Konstantinos N.
    Oikonomou, Anastasia
    Benali, Habib
    [J]. IEEE SIGNAL PROCESSING MAGAZINE, 2019, 36 (04) : 132 - 160
  • [4] A survey on deep learning in medical image reconstruction
    Ahishakiye, Emmanuel
    Van Gijzen, Martin Bastiaan
    Tumwiine, Julius
    Wario, Ruth
    Obungoloch, Johnes
    [J]. INTELLIGENT MEDICINE, 2021, 1 (03): 118 - 127
  • [5] Deep learning method for prediction of patient-specific dose distribution in breast cancer
    Ahn, Sang Hee
    Kim, EunSook
    Kim, Chankyu
    Cheon, Wonjoong
    Kim, Myeongsoo
    Lee, Se Byeong
    Lim, Young Kyung
    Kim, Haksoo
    Shin, Dongho
    Kim, Dae Yong
    Jeong, Jong Hwi
    [J]. RADIATION ONCOLOGY, 2021, 16 (01)
  • [6] Al-Shedivat M., 2017, arXiv preprint (cs.LG)
  • [7] The Application of Unsupervised Clustering Methods to Alzheimer's Disease
    Alashwal, Hany
    El Halaby, Mohamed
    Crouse, Jacob J.
    Abdalla, Areeg
    Moustafa, Ahmed A.
    [J]. FRONTIERS IN COMPUTATIONAL NEUROSCIENCE, 2019, 13
  • [8] Textural features corresponding to textural properties
    Amadasun, M.
    King, R.
    [J]. IEEE TRANSACTIONS ON SYSTEMS MAN AND CYBERNETICS, 1989, 19 (05): 1264 - 1274
  • [9] Inter-observer variability in target delineation increases during adaptive treatment of head-and-neck and lung cancer
    Apolle, Rudi
    Appold, Steffen
    Bijl, Henk P.
    Blanchard, Pierre
    Bussink, Johan
    Faivre-Finn, Corinne
    Khalifa, Jonathan
    Laprie, Anne
    Lievens, Yolande
    Madani, Indira
    Ruffier, Amandine
    de Ruysscher, Dirk
    van Elmpt, Wouter
    Troost, Esther G. C.
    [J]. ACTA ONCOLOGICA, 2019, 58 (10) : 1378 - 1385
  • [10] Ayhan M. S., 2018, 1st Conference on Medical Imaging