Explaining Machine Learning: Adding Interactivity to Develop Decision-making Visualization Expectations

Cited by: 1
Authors
Heleno, Marco [1 ,2 ]
Correia, Nuno [1 ,2 ]
Carvalhais, Miguel [3 ]
Affiliations
[1] NOVA Univ Lisbon, NOVA LINCS, Lisbon, Portugal
[2] NOVA Univ Lisbon, Fac Sci & Technol, Lisbon, Portugal
[3] Univ Porto, Fac Belas Artes, INESC TEC, Porto, Portugal
Source
PROCEEDINGS OF THE 9TH INTERNATIONAL CONFERENCE ON DIGITAL AND INTERACTIVE ARTS (ARTECH 2019) | 2019
Keywords
Explainable Artificial Intelligence; Interaction; Visualization; Machine Learning; Perceptrons; Decision-making; Black box;
DOI
10.1145/3359852.3359918
Chinese Library Classification
J [Art]
Discipline Classification Code
13; 1301
Abstract
Multi-layered Perceptrons, also widely known as Artificial Neural Networks, are one of the most used subsets of Machine Learning systems. They consist of a series of layered and interconnected Perceptrons, also referred to as Artificial Neurons, where each layer can perform different types of transformations on data. Because Multi-layered Perceptrons can consist of 10 to 30 layers, these transformations quickly become extremely complex and incomprehensible, and these systems therefore become black boxes. This, coupled with the increasing presence of these systems in our lives, has led us to focus on the necessity of their human-understandability and transparency. As Information Visualization has proved to be one of the most effective ways of explaining datasets, our hypothesis is that it can be equally useful for understanding the processes of decision-making in Machine Learning systems. We approached this by developing an algorithmic, data-driven visualization of the most elementary processing element in these systems: the Perceptron. Our latest iteration, which received a user interface, has been shown to allow debugging and to generate human expectations of the Perceptron's decision-making. This interdisciplinary and practice-based research focused on investigating the inner workings of a Perceptron and was developed through iterative cycles of research, design and implementation.
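The abstract treats the Perceptron as the most elementary processing element of these systems: a weighted sum of inputs passed through an activation, trained by adjusting weights against an error signal. The paper does not publish code, so the following is only a minimal illustrative sketch of such a Perceptron; the class name, step activation, learning rule and AND example are assumptions, not the authors' implementation.

```python
# Minimal sketch of a single Perceptron (artificial neuron).
# Assumed for illustration; not code from the paper.
import random


class Perceptron:
    def __init__(self, n_inputs, learning_rate=0.1):
        # Small random weights plus a bias term.
        self.weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
        self.bias = random.uniform(-1, 1)
        self.learning_rate = learning_rate

    def predict(self, inputs):
        # Weighted sum of inputs, passed through a step activation.
        total = sum(w * x for w, x in zip(self.weights, inputs)) + self.bias
        return 1 if total >= 0 else 0

    def train(self, inputs, target):
        # Classic Perceptron learning rule: nudge weights by the error.
        error = target - self.predict(inputs)
        self.weights = [w + self.learning_rate * error * x
                        for w, x in zip(self.weights, inputs)]
        self.bias += self.learning_rate * error
        return error


if __name__ == "__main__":
    # Learn the logical AND function; the weights and bias inspected here
    # are the kind of internal state a decision-making visualization exposes.
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    p = Perceptron(n_inputs=2)
    for epoch in range(10):
        for inputs, target in data:
            p.train(inputs, target)
    print(p.weights, p.bias, [p.predict(x) for x, _ in data])
```

Exposing the weights, bias and error at each training step, as in the print statement above, is one plausible way a data-driven visualization could let a user form expectations about the Perceptron's decisions.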
Pages: 3