Interpretable Local Concept-based Explanation with Human Feedback to Predict All-cause Mortality

Cited by: 6
Authors
Elshawi, Radwa [1 ]
Al-Mallah, Mouaz [2 ]
Affiliations
[1] Tartu Univ, Inst Comp Sci, Tartu, Estonia
[2] Houston Methodist DeBakey Heart & Vasc Ctr, Houston, TX, USA
Keywords
EXPLAINABLE AI;
DOI
10.8745/sio.548741
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Machine learning models are incorporated into many fields and disciplines, some of which require a high level of accountability and transparency, for example the healthcare sector. With the General Data Protection Regulation (GDPR), the plausibility and verifiability of the predictions made by machine learning models have become essential. A widely used category of explanation techniques attempts to explain a model's predictions by quantifying an importance score for each input feature. However, summarizing such scores into human-interpretable explanations is challenging. Another category of explanation techniques focuses on learning a domain representation in terms of high-level, human-understandable concepts and then utilizing them to explain predictions. These explanations are hampered by how the concepts are constructed, which is not intrinsically interpretable. To this end, we propose Concept-based Local Explanations with Feedback (CLEF), a novel local, model-agnostic explanation framework for learning a set of high-level, transparent concept definitions in high-dimensional tabular data that uses clinician-labeled concepts rather than raw features. CLEF maps the raw input features to high-level, intuitive concepts and then decomposes the evidence for the prediction of the instance being explained into those concepts. In addition, the proposed framework generates counterfactual explanations, suggesting the minimum changes to the instance's concept-based explanation that would lead to a different prediction. We demonstrate CLEF with simulated user feedback on predicting the risk of all-cause mortality. Such direct feedback is more effective than other techniques that rely on hand-labeled or automatically extracted concepts in learning concepts that align with ground-truth concept definitions.
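To make the workflow described in the abstract concrete, the sketch below illustrates the general idea of concept-based local attribution over clinician-style concepts, followed by a greedy concept-level counterfactual search. It is a minimal, hypothetical illustration under assumed feature names, concept thresholds, synthetic data, and a scikit-learn classifier; it is not the authors' CLEF implementation and it omits the human-feedback loop used to refine concept definitions.

```python
# Minimal, hypothetical sketch of concept-based local explanation with a
# concept-level counterfactual search. NOT the authors' CLEF code: the
# features, concept thresholds, and synthetic data are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000

# Synthetic tabular data: age (years), resting heart rate (bpm), METs achieved, diabetes flag.
X = np.column_stack([
    rng.normal(55, 10, n),
    rng.normal(75, 12, n),
    rng.normal(8, 3, n),
    rng.integers(0, 2, n),
])
y = ((X[:, 0] > 60) & (X[:, 2] < 7)).astype(int)  # toy "mortality" label

model = RandomForestClassifier(random_state=0).fit(X, y)

# Clinician-style concepts defined over raw features (illustrative thresholds only).
FEATURE_OF = {"elderly": 0, "tachycardia": 1, "low_fitness": 2, "diabetic": 3}
CONCEPTS = {
    "elderly":     lambda x: x[0] > 65,
    "tachycardia": lambda x: x[1] > 100,
    "low_fitness": lambda x: x[2] < 7,
    "diabetic":    lambda x: x[3] == 1,
}

def concept_attribution(x, baseline):
    """Score each active concept by how much the predicted risk drops when the
    corresponding raw feature is moved toward a low-risk baseline instance."""
    p_orig = model.predict_proba(x.reshape(1, -1))[0, 1]
    scores = {}
    for name, is_active in CONCEPTS.items():
        if not is_active(x):
            continue
        x_off = x.copy()
        x_off[FEATURE_OF[name]] = baseline[FEATURE_OF[name]]
        p_off = model.predict_proba(x_off.reshape(1, -1))[0, 1]
        scores[name] = p_orig - p_off  # evidence this concept contributes
    return p_orig, scores

baseline = X[y == 0].mean(axis=0)           # reference "low-risk" instance
x_query = np.array([70.0, 85.0, 5.0, 1.0])  # patient to be explained

risk, scores = concept_attribution(x_query, baseline)
print(f"predicted risk: {risk:.2f}")
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"  concept '{name}': {s:+.2f} toward the mortality prediction")

# Greedy concept-level counterfactual: deactivate the most influential concepts
# until the predicted label flips.
original_label = model.predict(x_query.reshape(1, -1))[0]
x_cf = x_query.copy()
for name, _ in sorted(scores.items(), key=lambda kv: -kv[1]):
    x_cf[FEATURE_OF[name]] = baseline[FEATURE_OF[name]]
    if model.predict(x_cf.reshape(1, -1))[0] != original_label:
        print(f"label flips once concepts up to '{name}' are deactivated")
        break
```

The point of the sketch is that attribution and counterfactuals are expressed in terms of the named concepts rather than raw feature values, which is the property the abstract attributes to CLEF.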
Pages: 833 - 855
Page count: 23
相关论文
共 67 条
  • [1] Act A. I, 2021, EUR-Lex-52021PC0206
  • [2] Adebayo J, 2016, Arxiv, DOI arXiv:1611.04967
  • [3] Rationale and Design of the Henry Ford ExercIse Testing Project (The FIT Project)
    Al-Mallah, Mouaz H.
    Keteyian, Steven J.
    Brawner, Clinton A.
    Whelton, Seamus
    Blaha, Michael J.
    [J]. CLINICAL CARDIOLOGY, 2014, 37 (08) : 456 - 461
  • [4] Bodria F, 2021, Arxiv, DOI [arXiv:2102.13076, DOI 10.1007/S10618-023-00933-9]
  • [5] Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission
    Caruana, Rich
    Lou, Yin
    Gehrke, Johannes
    Koch, Paul
    Sturm, Marc
    Elhadad, Noemie
    [J]. KDD'15: PROCEEDINGS OF THE 21ST ACM SIGKDD INTERNATIONAL CONFERENCE ON KNOWLEDGE DISCOVERY AND DATA MINING, 2015, : 1721 - 1730
  • [6] Validation of Electronic Health Record Phenotyping of Bipolar Disorder Cases and Controls
    Castro, Victor M.
    Minnier, Jessica
    Murphy, Shawn N.
    Kohane, Isaac
    Churchill, Susanne E.
    Gainer, Vivian
    Cai, Tianxi
    Hoffnagle, Alison G.
    Dai, Yael
    Block, Stefanie
    Weill, Sydney R.
    Nadal-Vicens, Mireya
    Pollastri, Alisha R.
    Rosenquist, J. Niels
    Goryachev, Sergey
    Ongur, Dost
    Sklar, Pamela
    Perlis, Roy H.
    Smoller, Jordan W.
    [J]. AMERICAN JOURNAL OF PSYCHIATRY, 2015, 172 (04) : 363 - 372
  • [7] Chen IY, 2018, ADV NEUR IN, V31
  • [8] Chen Irene Y, 2019, AMA J Ethics, V21, pE167, DOI 10.1001/amajethics.2019.167
  • [9] Cui S, 2021, Arxiv, DOI arXiv:2006.08267
  • [10] Machine Learning and the Profession of Medicine
    Darcy, Alison M.
    Louie, Alan K.
    Roberts, Laura Weiss
    [J]. JAMA-JOURNAL OF THE AMERICAN MEDICAL ASSOCIATION, 2016, 315 (06): : 551 - 552