Directive Explanations for Actionable Explainability in Machine Learning Applications

Cited: 0
Authors
Singh, Ronal [1 ]
Miller, Tim [1 ]
Lyons, Henrietta [1 ]
Sonenberg, Liz [1 ]
Velloso, Eduardo [1 ]
Vetere, Frank [1 ]
Howe, Piers [2 ]
Dourish, Paul [3 ]
Affiliations
[1] Univ Melbourne, Sch Comp & Informat Syst, Melbourne, Vic 3010, Australia
[2] Univ Melbourne, Melbourne Sch Psychol Sci, Melbourne, Vic 3010, Australia
[3] Univ Calif Irvine, Donald Bren Sch Informat & Comp Sci, Irvine, CA 92697 USA
Funding
Australian Research Council;
Keywords
Explainable AI; directive explanations; counterfactual explanations; BLACK-BOX;
DOI
10.1145/3579363
CLC Number
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
In this article, we show that explanations of decisions made by machine learning systems can be improved by not only explaining why a decision was made but also explaining how an individual could obtain their desired outcome. We formally define the concept of directive explanations (those that offer specific actions an individual could take to achieve their desired outcome), introduce two forms of directive explanations (directive-specific and directive-generic), and describe how these can be generated computationally. We investigate people's preferences for and perceptions of directive explanations through two online studies, one quantitative and the other qualitative, each covering two domains (credit scoring and employee satisfaction). We find a significant preference for both forms of directive explanations over non-directive counterfactual explanations. However, we also find that these preferences are affected by many aspects, including individual preferences and social factors. We conclude that deciding what type of explanation to provide requires information about the recipients and other contextual information. This reinforces the need for a human-centered and context-specific approach to explainable AI.
Pages: 26
Related Papers (50 total)
  • [1] A Survey on the Explainability of Supervised Machine Learning
    Burkart, Nadia
    Huber, Marco F.
    JOURNAL OF ARTIFICIAL INTELLIGENCE RESEARCH, 2021, 70 : 245 - 317
  • [2] Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions
    Bhattacharya, Aditya
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024,
  • [3] Demystifying the black box: an overview of explainability methods in machine learning
    Kinger, S.
    Kulkarni, V.
    INTERNATIONAL JOURNAL OF COMPUTERS AND APPLICATIONS, 2024, 46 (02) : 90 - 100
  • [4] Leveraging explanations in interactive machine learning: An overview
    Teso, Stefano
    Alkan, Oznur
    Stammer, Wolfgang
    Daly, Elizabeth
    FRONTIERS IN ARTIFICIAL INTELLIGENCE, 2023, 6
  • [5] Explainability of Machine Learning Models for Bankruptcy Prediction
    Park, Min Sue
    Son, Hwijae
    Hyun, Chongseok
    Hwang, Hyung Ju
    IEEE ACCESS, 2021, 9 : 124887 - 124899
  • [6] Adversarial Robustness and Explainability of Machine Learning Models
    Gafur, Jamil
    Goddard, Steve
    Lai, William K. M.
    PRACTICE AND EXPERIENCE IN ADVANCED RESEARCH COMPUTING 2024, PEARC 2024, 2024,
  • [7] Counterfactual Explanations and Algorithmic Recourses for Machine Learning: A Review
    Verma, Sahil
    Boonsanong, Varich
    Hoang, Minh
    Hines, Keegan
    Dickerson, John
    Shah, Chirag
    ACM COMPUTING SURVEYS, 2024, 56 (12)
  • [8] Interpretability and Explainability of Machine Learning Models: Achievements and Challenges
    Henriques, J.
    Rocha, T.
    de Carvalho, P.
    Silva, C.
    Paredes, S.
    INTERNATIONAL CONFERENCE ON BIOMEDICAL AND HEALTH INFORMATICS 2022, ICBHI 2022, 2024, 108 : 81 - 94
  • [9] A-XAI: adversarial machine learning for trustable explainability
    Agrawal, Nishita
    Pendharkar, Isha
    Shroff, Jugal
    Raghuvanshi, Jatin
    Neogi, Akashdip
    Patil, Shruti
    Walambe, Rahee
    Kotecha, Ketan
    AI AND ETHICS, 2024, 4 (4): : 1143 - 1174
  • [10] A social evaluation of the perceived goodness of explainability in machine learning
    Wanner, Jonas
    Herm, Lukas-Valentin
    Heinrich, Kai
    Janiesch, Christian
    JOURNAL OF BUSINESS ANALYTICS, 2022, 5 (01) : 29 - 50