Adversarial attacks and defenses in explainable artificial intelligence: A survey

Cited: 23
Authors
Baniecki, Hubert [1 ]
Biecek, Przemyslaw [1 ,2 ]
Affiliations
[1] Univ Warsaw, Warsaw, Poland
[2] Warsaw Univ Technol, Warsaw, Poland
Keywords
Explainability; Interpretability; Adversarial machine learning; Safety; Security; Robustness; Review; LEARNING-MODELS; EXPLANATION; FAIRNESS;
DOI
10.1016/j.inffus.2024.102303
Chinese Library Classification (CLC)
TP18 [Artificial intelligence theory]
Subject classification codes
081104; 0812; 0835; 1405
Abstract
Explainable artificial intelligence (XAI) methods are portrayed as a remedy for debugging and trusting statistical and deep learning models, as well as interpreting their predictions. However, recent advances in adversarial machine learning (AdvML) highlight the limitations and vulnerabilities of state-of-the-art explanation methods, putting their security and trustworthiness into question. The possibility of manipulating, fooling, or fairwashing evidence of the model's reasoning has detrimental consequences when applied in high-stakes decision-making and knowledge discovery. This survey provides a comprehensive overview of research concerning adversarial attacks on explanations of machine learning models, as well as on fairness metrics. We introduce a unified notation and taxonomy of methods, facilitating a common ground for researchers and practitioners from the intersecting research fields of AdvML and XAI. We discuss how to defend against attacks and design robust interpretation methods. We contribute a list of existing insecurities in XAI and outline the emerging research directions in adversarial XAI (AdvXAI). Future work should address improving explanation methods and evaluation protocols to take into account the reported safety issues.
Pages: 14
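
The abstract mentions manipulating and fooling evidence of a model's reasoning. As a concrete illustration of one attack family the survey covers (explanation manipulation), below is a minimal sketch, not taken from the paper itself, of an adversarial perturbation that pushes a gradient-based saliency map toward an arbitrary target while keeping the model's prediction nearly unchanged. PyTorch is assumed; the model architecture, target map, weighting factor, and step count are illustrative assumptions only.

    import torch

    torch.manual_seed(0)
    # A tiny smooth model; smooth activations (Tanh here) make the gradient
    # explanation itself differentiable, which is what the attack exploits.
    model = torch.nn.Sequential(torch.nn.Linear(10, 16), torch.nn.Tanh(),
                                torch.nn.Linear(16, 2))

    def saliency(x):
        # Gradient of the top logit w.r.t. the input: a basic saliency explanation.
        grad, = torch.autograd.grad(model(x).max(), x, create_graph=True)
        return grad

    x = torch.randn(10, requires_grad=True)
    y0 = model(x).detach()                             # prediction to preserve
    target_map = torch.roll(saliency(x).detach(), 3)   # arbitrary "fake" explanation

    x_adv = x.detach().clone().requires_grad_(True)
    opt = torch.optim.Adam([x_adv], lr=1e-2)
    for _ in range(300):
        opt.zero_grad()
        # Move the explanation toward the target while pinning the prediction.
        loss = ((saliency(x_adv) - target_map) ** 2).sum() \
             + 10.0 * ((model(x_adv) - y0) ** 2).sum()
        loss.backward()
        opt.step()

    print("prediction drift:", (model(x_adv) - y0).abs().max().item())
    print("explanation change:", (saliency(x_adv) - saliency(x)).abs().sum().item())

The design choice mirrors the attacks surveyed: because the explanation is a differentiable function of the input, an attacker can optimize the input against an explanation-matching loss, with a penalty term that keeps the model's output, and hence its apparent behavior, effectively fixed.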