The coming of age of interpretable and explainable machine learning models

Cited by: 58
Authors
Lisboa, P. J. G. [1 ]
Saralajew, S. [2 ]
Vellido, A. [3 ,4 ]
Fernandez-Domenech, R. [3 ,4 ]
Villmann, T. [5 ]
Affiliations
[1] Liverpool John Moores Univ, Liverpool, England
[2] NEC Labs Europe GmbH, Heidelberg, Germany
[3] UPC BarcelonaTech, Dept Comp Sci, Barcelona, Spain
[4] UPC Res Ctr, IDEAI, Barcelona, Spain
[5] Univ Appl Sci Mittweida, Saxon Inst Comp Intelligence & Machine Learning, Mittweida, Germany
Keywords
XAI; interpretable ML; explainable ML; transparent AI; automated decision-making; neural networks; artificial intelligence; classification; explanation
DOI
10.1016/j.neucom.2023.02.040
CLC Number
TP18 [Theory of Artificial Intelligence]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Machine-learning-based systems are now part of a wide array of real-world applications, seamlessly embedded in the social realm. In the wake of this development, strict legal regulations for these systems are currently being drafted to address some of the risks they may pose. This is the coming of age of the concepts of interpretability and explainability in machine-learning-based data analysis, which can no longer be treated as purely academic research problems. In this paper, we discuss explainable and interpretable machine learning as post-hoc and ante-hoc strategies for addressing regulatory restrictions, and we highlight several related aspects, including their evaluation and assessment and the legal boundaries of application. (c) 2023 Elsevier B.V. All rights reserved.
Pages: 25-39 (15 pages)
References
127 items in total
[11] Barredo Arrieta A., Diaz-Rodriguez N., Del Ser J., Bennetot A., Tabik S., Barbado A., Garcia S., Gil-Lopez S., Molina D., Benjamins R., Chatila R., Herrera F. Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 2020, 58: 82-115.
[12] Beaudouin V. arXiv preprint, 2020, arXiv:2003.07703.
[13] Bengio Y., Courville A., Vincent P. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2013, 35(8): 1798-1828.
[14] Bengio Y. Learning deep architectures for AI. Foundations and Trends in Machine Learning, 2009, 2(1): 1-127.
[15] Benjamens S., Dhunnoo P., Mesko B. The state of artificial intelligence-based FDA-approved medical devices and algorithms: an online database. npj Digital Medicine, 2020, 3(1).
[16] Biehl M., Hammer B., Villmann T. Prototype-based models in machine learning. Wiley Interdisciplinary Reviews: Cognitive Science, 2016, 7(2): 92-111.
[17] Bourgeais V., Zehraoui F., Hanczar B. GraphGONet: a self-explaining neural network encapsulating the Gene Ontology graph for phenotype prediction on gene expression. Bioinformatics, 2022, 38(9): 2504-2511.
[18] Bras-Geraldes C., Papoila A., Xufre P. Odds ratio function estimation using a generalized additive neural network. Neural Computing & Applications, 2020, 32(8): 3459-3474.
[19] Breiman L. Random forests. Machine Learning, 2001, 45(1): 5-32.
[20] Breiman L. Classification and Regression Trees. 1984. DOI: 10.1201/9781315139470.