Experimental Analysis of Trustworthy In-Vehicle Intrusion Detection System Using eXplainable Artificial Intelligence (XAI)

Cited by: 23
Authors:
Lundberg, Hampus [1 ]
Mowla, Nishat, I [2 ]
Abedin, Sarder Fakhrul [1 ]
Thar, Kyi [1 ]
Mahmood, Aamir [1 ]
Gidlund, Mikael [1 ]
Raza, Shahid [2 ]
Affiliations:
[1] Mid Sweden Univ, Dept Informat Syst & Technol, S-85170 Sundsvall, Sweden
[2] RISE, Lindholmspiren 3A, S-41756 Gothenburg, Sweden
Funding:
EU Horizon 2020
Keywords:
Artificial intelligence; Intrusion detection; Automotive engineering; Behavioral sciences; Random forests; Deep learning; Trust management; Automotive; intrusion detection system; machine learning; deep learning; XAI; trustworthiness;
DOI:
10.1109/ACCESS.2022.3208573
CLC Number:
TP [Automation Technology, Computer Technology]
Subject Classification Code:
0812
Abstract:
An anomaly-based In-Vehicle Intrusion Detection System (IV-IDS) is one of the protection mechanisms for detecting cyber attacks on automotive vehicles. Using artificial intelligence (AI) for anomaly detection to thwart cyber attacks is promising, but it suffers from false alarms and from decisions that are hard to interpret. This leads to uncertainty about, and distrust toward, such an IDS design unless it can explain its behavior, e.g., by using eXplainable AI (XAI). In this paper, we consider the XAI-powered design of such an IV-IDS using CAN bus data from a public dataset named "Survival". Novel features are engineered, and a Deep Neural Network (DNN) is trained over the dataset. A visualization-based explanation, "VisExp", is created to explain the behavior of the AI-based IV-IDS and is evaluated by experts in a survey against a rule-based explanation. Our results show that experts' trust in the AI-based IV-IDS increases significantly when they are provided with VisExp (more so than with the rule-based explanation). These findings confirm the effect, and by extension the need, of explainability in automated systems; VisExp, as a source of increased explainability, shows promise in helping involved parties gain trust in such systems.
Pages: 102831-102841
Number of pages: 11
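The abstract outlines a pipeline of engineering features from CAN bus traffic, training a DNN detector, and generating a visualization-based explanation of its decisions. Below is a minimal sketch of such a pipeline, assuming a pre-extracted feature table and using scikit-learn plus the shap library as a stand-in for the paper's own DNN and VisExp design; the file name, feature names, and model shape are illustrative assumptions, not the authors' actual method.

```python
# Illustrative sketch only: the CSV layout, the three engineered features, the
# network size, and the use of SHAP for the visual explanation are assumptions
# made for this example; they are not the paper's actual features or VisExp design.
import pandas as pd
import shap
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Hypothetical pre-extracted CAN features (e.g., per-ID message frequency,
# inter-arrival time, payload entropy) with a binary attack label.
df = pd.read_csv("survival_can_features.csv")
feature_names = ["msg_freq", "inter_arrival", "payload_entropy"]
X = df[feature_names].to_numpy()
y = df["attack"].to_numpy()

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# A small fully connected network standing in for the paper's DNN.
clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))

# Model-agnostic SHAP attributions as one way to build a per-feature,
# visualization-based explanation of the detector's decisions.
background = shap.sample(X_train, 100)
explainer = shap.KernelExplainer(lambda data: clf.predict_proba(data)[:, 1], background)
shap_values = explainer.shap_values(X_test[:50])
shap.summary_plot(shap_values, X_test[:50], feature_names=feature_names)
```

The summary plot here serves only as a generic example of a visual, attribution-based explanation; the paper's VisExp is the authors' own visualization design and is evaluated against a rule-based explanation in their expert survey.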