Perturbation-based methods for explaining deep neural networks: A survey

Cited by: 94
Authors
Ivanovs, Maksims [1 ]
Kadikis, Roberts [1 ]
Ozols, Kaspars [1 ]
Affiliations
[1] Institute of Electronics and Computer Science, Dzerbenes Str. 14, LV-1006 Riga, Latvia
Keywords
Deep learning; Explainable artificial intelligence; Perturbation-based methods; Black box
DOI
10.1016/j.patrec.2021.06.030
Chinese Library Classification
TP18 [Artificial intelligence theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Deep neural networks (DNNs) have achieved state-of-the-art results in a broad range of tasks, in particular those dealing with perceptual data. However, the full-scale application of DNNs in safety-critical areas is hindered by their black-box nature, which makes their inner workings nontransparent. In response to the black-box problem, the field of explainable artificial intelligence (XAI) has recently emerged and is currently growing rapidly. The present survey is concerned with perturbation-based XAI methods, which explore DNN models by perturbing their input and observing the resulting changes in the output. We present an overview of the most recent research, focusing on the differences and similarities in the application of perturbation-based methods to different data types, from the extensively studied perturbations of images to the just emerging research on perturbations of video, natural language, software code, and reinforcement learning entities. (c) 2021 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY license (http://creativecommons.org/licenses/by/4.0/).
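The core mechanism the abstract describes (perturb the input, observe the change in the output) can be made concrete with a minimal occlusion-style sketch. Everything below is illustrative rather than drawn from the survey itself: the model is assumed to be an arbitrary callable returning class scores, and the patch size and zero baseline are assumed defaults.

import numpy as np

def occlusion_map(model, image, patch=8, baseline=0.0):
    # model : callable mapping an image of shape (H, W, C) to class scores
    # image : input array of shape (H, W, C)
    scores = model(image)
    target = int(np.argmax(scores))        # explain the top prediction
    base_score = scores[target]

    h, w = image.shape[:2]
    heatmap = np.zeros((h // patch, w // patch))

    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            perturbed = image.copy()
            # occlude one square region with the baseline value
            perturbed[i:i + patch, j:j + patch] = baseline
            # record how much the target-class score drops
            heatmap[i // patch, j // patch] = base_score - model(perturbed)[target]
    return heatmap

# toy stand-in for a trained DNN: scores two classes from pixel statistics
toy_model = lambda x: np.array([x.mean(), 1.0 - x.mean()])
saliency = occlusion_map(toy_model, np.random.rand(32, 32, 3))
print(saliency.shape)  # (4, 4)

Regions whose occlusion causes the largest score drop are those the prediction depends on most; finer patches yield higher-resolution maps at a proportionally higher number of forward passes.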
Pages: 228-234
Page count: 7