Post-hoc vs ante-hoc explanations: xAI design guidelines for data scientists

Cited by: 60
Authors
Retzlaff, Carl O. [1 ]
Angerschmid, Alessa [1 ,3 ]
Saranti, Anna [1 ,3 ]
Schneeberger, David [3 ]
Roettger, Richard [2 ]
Mueller, Heimo [4 ]
Holzinger, Andreas [1 ,3 ]
Affiliations
[1] Univ Nat Resources & Life Sci BOKU, Inst Forest Engn, Dept Forest & Soil Sci, Human Ctr AI Lab, Vienna, Austria
[2] Univ South Denmark, Dept Math & Comp Sci, Odense, Denmark
[3] Med Univ Graz, Inst Med Informat Stat & Documentat, Graz, Austria
[4] Med Univ Graz, Diagnost & Res Ctr Mol Biomed, Informat Sci & Machine Learning Grp, Graz, Austria
Source
COGNITIVE SYSTEMS RESEARCH | 2024, Vol. 86
Funding
Austrian Science Fund;
Keywords
Explainable AI; xAI; Post-hoc; Ante-hoc; Explanations; Guideline; EXPLAINABLE ARTIFICIAL-INTELLIGENCE; BLACK-BOX; AI; CAUSABILITY; DECISIONS; QUALITY;
DOI
10.1016/j.cogsys.2024.101243
CLC Classification
TP18 [Theory of artificial intelligence];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The growing field of explainable Artificial Intelligence (xAI) has given rise to a multitude of techniques and methodologies, yet this expansion has created a growing gap between existing xAI approaches and their practical application. This poses a considerable obstacle for data scientists striving to identify the optimal xAI technique for their needs. To address this problem, our study presents a customized decision support framework to aid data scientists in choosing a suitable xAI approach for their use case. Drawing from a literature survey and insights from interviews with five experienced data scientists, we introduce a decision tree based on the trade-offs inherent in various xAI approaches, guiding the selection between six commonly used xAI tools. Our work critically examines six prevalent ante-hoc and post-hoc xAI methods, assessing their applicability in real-world contexts through expert interviews. The aim is to equip data scientists and policymakers with the capacity to select xAI methods that not only demystify the decision-making process, but also enrich user understanding and interpretation, ultimately advancing the application of xAI in practical settings.
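The abstract describes a decision tree that steers a data scientist toward an ante-hoc (interpretable by design) or post-hoc (explain after training) method based on trade-offs. The paper's actual tree and tool set are not reproduced here; the following is a minimal hypothetical sketch of what such a decision-support helper could look like, with illustrative questions and method names that are assumptions, not the authors' published guideline.

```python
# Hypothetical decision-support sketch inspired by the paper's idea of a
# decision tree over xAI trade-offs. Branch questions and suggested method
# families are illustrative assumptions only.

def suggest_xai_method(model_is_interpretable: bool,
                       needs_local_explanations: bool,
                       model_access: str) -> str:
    """Return an illustrative xAI method family for a given use case.

    model_access: "white-box" if model internals (structure, gradients)
    are available, otherwise "black-box".
    """
    if model_is_interpretable:
        # Ante-hoc: the model explains itself by design.
        return "ante-hoc (e.g. decision tree, linear model, GAM)"
    if needs_local_explanations:
        # Post-hoc, per-prediction explanations.
        if model_access == "white-box":
            return "post-hoc local, white-box (e.g. gradient-based saliency)"
        return "post-hoc local, model-agnostic (e.g. LIME- or SHAP-style)"
    # Post-hoc, global understanding of overall model behavior.
    return "post-hoc global (e.g. permutation importance, partial dependence)"

# Example: a black-box classifier where the user needs per-case explanations.
print(suggest_xai_method(model_is_interpretable=False,
                         needs_local_explanations=True,
                         model_access="black-box"))
```

Encoding the branches as plain conditionals keeps the trade-off logic auditable, which mirrors the paper's aim of making method selection transparent rather than ad hoc.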
Pages: 17