Harnessing Prior Knowledge for Explainable Machine Learning: An Overview

Cited by: 10
Authors
Beckh, Katharina [1 ]
Mueller, Sebastian [2 ]
Jakobs, Matthias [3 ]
Toborek, Vanessa [2 ]
Tan, Hanxiao [3 ]
Fischer, Raphael [3 ]
Welke, Pascal [2 ]
Houben, Sebastian [4 ]
von Rueden, Laura [1 ]
Affiliations
[1] Fraunhofer IAIS, St Augustin, Germany
[2] Univ Bonn, Bonn, Germany
[3] TU Dortmund Univ, Dortmund, Germany
[4] Hsch Bonn Rhein Sieg, Bonn, Germany
Source
2023 IEEE CONFERENCE ON SECURE AND TRUSTWORTHY MACHINE LEARNING, SATML | 2023
Keywords
Machine learning; Taxonomy; Human computer interaction; Knowledge representation; NEURAL-NETWORKS; BLACK-BOX;
DOI
10.1109/SaTML54575.2023.00038
CLC Classification
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
The application of complex machine learning models has elicited research to make them more explainable. However, most explainability methods cannot provide insight beyond the given data and therefore require additional information about the context. We argue that harnessing prior knowledge improves the accessibility of explanations. We present an overview of integrating prior knowledge into machine learning systems in order to improve explainability. We introduce a categorization of current research into three main categories, which integrate knowledge either into the machine learning pipeline or into the explainability method, or derive knowledge from explanations. To classify the papers, we build upon the existing taxonomy of informed machine learning and extend it from the perspective of explainability. We conclude with open challenges and research directions.
Pages: 450-463
Page count: 14