Explainable Artificial Intelligence for Tabular Data: A Survey

Cited by: 74
Authors
Sahakyan, Maria [1 ,2 ]
Aung, Zeyar [1 ]
Rahwan, Talal [2 ]
Affiliations
[1] Khalifa Univ, Dept Elect Engn & Comp Sci, Abu Dhabi, U Arab Emirates
[2] New York Univ Abu Dhabi NYUAD, Dept Comp Sci, Abu Dhabi, U Arab Emirates
Keywords
Data models; Solid modeling; Numerical models; Neural networks; Medical services; Licenses; Inspection; Black-box models; explainable artificial intelligence; machine learning; model interpretability; RULE EXTRACTION; NEURAL-NETWORKS; BLACK-BOX; FEATURE-SELECTION; DECISION TREES; MODELS; CLASSIFICATIONS; CLASSIFIERS; ALGORITHM; AI;
DOI
10.1109/ACCESS.2021.3116481
CLC (Chinese Library Classification) number
TP [Automation technology; computer technology]
Subject classification code
0812
Abstract
Machine learning techniques are increasingly gaining attention due to their widespread use in various disciplines across academia and industry. Despite their tremendous success, many such techniques suffer from the "black-box" problem, which refers to situations where the data analyst is unable to explain why such techniques arrive at certain decisions. This problem has fuelled interest in Explainable Artificial Intelligence (XAI), which refers to techniques that can easily be interpreted by humans. Unfortunately, many of these techniques are not suitable for tabular data, which is surprising given the importance and widespread use of tabular data in critical applications such as finance, healthcare, and criminal justice. Also surprising is the fact that, despite the vast literature on XAI, there are still no survey articles to date that focus on tabular data. Consequently, despite the existing survey articles that cover a wide range of XAI techniques, it remains challenging for researchers working on tabular data to go through all of these surveys and extract the techniques that are suitable for their analysis. Our article fills this gap by providing a comprehensive and up-to-date survey of the XAI techniques that are relevant to tabular data. Furthermore, we categorize the references covered in our survey, indicating the type of the model being explained, the approach being used to provide the explanation, and the XAI problem being addressed. Our article is the first to provide researchers with a map that helps them navigate the XAI literature in the context of tabular data.
Pages: 135392-135422
Page count: 31