SoK: Explainable Machine Learning for Computer Security Applications

Cited by: 10
Authors
Nadeem, Azqa [1 ]
Vos, Daniel [1 ]
Cao, Clinton [1 ]
Pajola, Luca [2 ]
Dieck, Simon [1 ]
Baumgartner, Robert [1 ]
Verwer, Sicco [1 ]
Affiliations
[1] Delft Univ Technol, Delft, Netherlands
[2] Univ Padua, Padua, Italy
Source
2023 IEEE 8TH EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY, EUROS&P | 2023
Keywords
XAI; Machine learning; Cyber security; AI;
DOI
10.1109/EuroSP57164.2023.00022
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Explainable Artificial Intelligence (XAI) aims to improve the transparency of machine learning (ML) pipelines. We systematize the rapidly growing (but fragmented) microcosm of studies that develop and utilize XAI methods for defensive and offensive cybersecurity tasks. We identify 3 cybersecurity stakeholders, i.e., model users, designers, and adversaries, who utilize XAI for 4 distinct objectives within an ML pipeline, namely 1) XAI-enabled user assistance, 2) XAI-enabled model verification, 3) explanation verification & robustness, and 4) offensive use of explanations. Our analysis of the literature indicates that many XAI applications are designed with little understanding of how they might be integrated into analyst workflows: user studies for explanation evaluation are conducted in only 14% of the cases. The security literature sometimes also fails to disentangle the roles of the various stakeholders, e.g., by providing explanations to model users and designers while also exposing them to adversaries. Additionally, the role of model designers is particularly minimized in the security literature. To this end, we present an illustrative tutorial for model designers, demonstrating how XAI can help with model verification. We also discuss scenarios where interpretability by design may be a better alternative. The systematization and the tutorial enable us to challenge several assumptions and present open problems that can help shape the future of XAI research within cybersecurity.
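The abstract's notion of XAI-enabled model verification can be illustrated with a minimal permutation-importance sketch. Everything below is a hypothetical stand-in (synthetic "flow" data, a threshold classifier), not the paper's tutorial: shuffling one feature column and measuring the resulting accuracy drop reveals whether the model actually relies on the features the designer intends.

```python
# Hedged sketch of XAI-enabled model verification via permutation importance.
# The data and "model" are invented for illustration, not from the paper.
import random

random.seed(0)

def make_data(n=200):
    """Synthetic flows: feature 0 (bytes sent) determines the label;
    feature 1 is pure noise."""
    X, y = [], []
    for _ in range(n):
        bytes_sent, noise = random.random(), random.random()
        X.append([bytes_sent, noise])
        y.append(1 if bytes_sent > 0.5 else 0)
    return X, y

def model(x):
    # A threshold classifier standing in for a trained ML model.
    return 1 if x[0] > 0.5 else 0

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

def permutation_importance(X, y, feature):
    """Accuracy drop when one feature column is shuffled."""
    base = accuracy(X, y)
    column = [x[feature] for x in X]
    random.shuffle(column)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, column):
        row[feature] = v
    return base - accuracy(X_perm, y)

X, y = make_data()
imp_bytes = permutation_importance(X, y, 0)  # large drop: model uses it
imp_noise = permutation_importance(X, y, 1)  # no drop: model ignores it
print(f"importance(bytes)={imp_bytes:.2f}, importance(noise)={imp_noise:.2f}")
```

A designer verifying this model would confirm that the predictive feature carries the importance while the noise feature contributes nothing; the converse pattern would flag a spurious shortcut.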
Pages: 221-240 (20 pages)