Directive Explanations for Actionable Explainability in Machine Learning Applications

Cited: 0
Authors
Singh, Ronal [1 ]
Miller, Tim [1 ]
Lyons, Henrietta [1 ]
Sonenberg, Liz [1 ]
Velloso, Eduardo [1 ]
Vetere, Frank [1 ]
Howe, Piers [2 ]
Dourish, Paul [3 ]
Affiliations
[1] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic 3010, Australia
[2] Univ Melbourne, Melbourne Sch Psychol Sci, Melbourne, Vic 3010, Australia
[3] Univ Calif Irvine, Donald Bren Sch Informat & Comp Sci, Irvine, CA 92697 USA
Funding
Australian Research Council
Keywords
Explainable AI; directive explanations; counterfactual explanations; black-box
DOI
10.1145/3579363
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
In this article, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanations (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people's preference for and perception of directive explanations through two online studies, one quantitative and the other qualitative, each covering two domains (credit scoring and employee satisfaction). We find a significant preference for both forms of directive explanations over non-directive counterfactual explanations. However, we also find that preferences are shaped by many factors, including individual differences and social context. We conclude that deciding what type of explanation to provide requires information about the recipients and other contextual information. This reinforces the need for a human-centered and context-specific approach to explainable AI.
Pages: 26