Tell Me More: Black Box Explainability for APT Detection on System Provenance Graphs

Cited by: 0
Authors
Welter, Felix [1 ]
Wilkens, Florian [1 ]
Fischer, Mathias [1 ]
Affiliations
[1] Univ Hamburg, Hamburg, Germany
Source
ICC 2023 - IEEE INTERNATIONAL CONFERENCE ON COMMUNICATIONS, 2023
Keywords
advanced persistent threat; explainable artificial intelligence; data provenance; APT; AI; XAI;
DOI
10.1109/ICC45041.2023.10279468
Chinese Library Classification (CLC)
TN [Electronic Technology, Telecommunication Technology];
Discipline Code
0809;
Abstract
Nowadays, companies, critical infrastructure, and governments face cyber attacks every day, ranging from simple denial-of-service and password-guessing attacks to complex nation-state attack campaigns, so-called advanced persistent threats (APTs). Defenders employ intrusion detection systems (IDSs), among other tools, to detect malicious activity and protect network assets. Detection techniques have evolved alongside these threats, and modern systems usually rely on some form of artificial intelligence (AI) or anomaly detection as part of their defense portfolio. While these systems are able to achieve higher accuracy in detecting APT activity, they cannot provide much context about the attack, as the underlying models are often too complex to interpret. This paper presents an approach to explain single predictions (i.e., detected attacks) of any graph-based anomaly detection system. By systematically modifying the input graph of an anomaly and observing the output, we leverage a variation of permutation importance to identify the parts of the graph that are likely responsible for the detected anomaly. Our approach treats the anomaly detection function as a black box and is thus applicable to any whole-graph explanation problem. Our results on two established datasets for APT detection (StreamSpot & DARPA TC Engagement Three) indicate that our approach can identify nodes that are likely part of the anomaly. We quantify this through our area under baseline (AuB) metric and show that the AuB is higher for anomalous graphs. Further analysis via the Wilcoxon rank-sum test confirms that these results are statistically significant with a p-value of 0.0041%.
Pages: 3817-3823
Page count: 7
Related Papers
2 items in total
  • [1] Andres, Alain; Martinez-Seras, Aitor; Lana, Ibai; Del Ser, Javier. On the black-box explainability of object detection models for safe and trustworthy industrial applications. RESULTS IN ENGINEERING, 2024, 24.
  • [2] Acun, Cagla; Nasraoui, Olfa. In-Training Explainability Frameworks: A Method to Make Black-Box Machine Learning Models More Explainable. 2023 IEEE INTERNATIONAL CONFERENCE ON WEB INTELLIGENCE AND INTELLIGENT AGENT TECHNOLOGY, WI-IAT, 2023: 230-237.