A survey of surveys on the use of visualization for interpreting machine learning models

Cited: 91
Authors
Chatzimparmpas, Angelos [1 ]
Martins, Rafael M. [1 ]
Jusufi, Ilir [1 ]
Kerren, Andreas [1 ]
Affiliations
[1] Linnaeus Univ, Dept Comp Sci & Media Technol, Vejdes Plats 7, SE-35195 Vaxjo, Sweden
Keywords
Survey of surveys; literature review; visualization; explainable machine learning; interpretable machine learning; taxonomy; meta-analysis; OF-THE-ART; VISUAL ANALYTICS; BLACK-BOX; INTERPRETABILITY
DOI
10.1177/1473871620904671
Chinese Library Classification
TP31 [Computer software]
Subject classification
081202; 0835
Abstract
Research in machine learning has become very popular in recent years, with many types of models proposed to comprehend and predict patterns and trends in data originating from different domains. As these models become increasingly complex, it also becomes harder for users to assess and trust their results, since their internal operations are mostly hidden in black boxes. The interpretation of machine learning models is currently a hot topic in the information visualization community, with results showing that insights from machine learning models can lead to better predictions and improve the trustworthiness of the results. Consequently, multiple (and extensive) survey articles have been published recently, attempting to summarize the large number of original research papers on the topic. However, it is not always clear what these surveys cover, how much they overlap, which types of machine learning models they address, or exactly what readers will find in each of them. In this article, we present a meta-analysis (i.e. a "survey of surveys") of manually collected survey papers on the visual interpretation of machine learning models, including the papers discussed in the selected surveys. Our aim is to serve both as a detailed summary of and a guide through this survey ecosystem, by acquiring, cataloging, and presenting fundamental knowledge of the state of the art and of research opportunities in the area. Our results confirm the growing trend of interpreting machine learning with visualizations in recent years, and show that visualization can assist in, for example, the online training of deep learning models and in enhancing trust in machine learning. However, exactly how this assistance should take place remains an open challenge for the visualization community.
Pages: 207-233 (27 pages)