Leveraging Explainable AI for Actionable Insights in IoT Intrusion Detection

Cited: 0
|
Authors
Gyawali, Sohan [1 ]
Huang, Jiaqi [2 ]
Jiang, Yili [3 ]
Affiliations
[1] East Carolina Univ, Dept Technol Syst, Greenville, NC 27858 USA
[2] Univ Cent Missouri, Dept Comp Sci & Cybersecur, Warrensburg, MO USA
[3] Univ Mississippi, Dept Comp & Informat Sci, University, MS USA
Source
2024 19TH ANNUAL SYSTEM OF SYSTEMS ENGINEERING CONFERENCE, SOSE 2024 | 2024
Keywords
DOI
10.1109/SOSE62659.2024.10620966
Chinese Library Classification
T [Industrial Technology];
Discipline Code
08;
Abstract
The rise of IoT networks has heightened the risk of cyber attacks, necessitating the development of robust detection methods. Although deep learning and complex models show promise in identifying sophisticated attacks, they face challenges related to explainability and actionable insights. In this investigation, we explore and contrast various explainable AI techniques, including LIME, SHAP, and counterfactual explanations, that can be used to enhance the explainability of intrusion detection outcomes. Furthermore, we introduce a framework that utilizes counterfactual SHAP to not only provide explanations but also generate actionable insights for guiding appropriate actions or automating intrusion response systems. We validate the effectiveness of various models through careful analysis on the CICIoT2023 dataset. Additionally, we perform a comparative evaluation of our proposed framework against previous approaches, demonstrating its ability to produce actionable insights.
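The counterfactual-explanation idea described in the abstract can be illustrated with a minimal sketch (this is not the authors' code; the feature names, synthetic data, and greedy search are invented for illustration): given a flow the classifier flags as an attack, find a small feature change that flips the decision, yielding an actionable "what would need to differ" insight.

```python
# Hedged sketch of a counterfactual explanation for an IDS-style classifier.
# All names and data here are hypothetical, not from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
# Synthetic flow features: [packets_per_sec, mean_payload_bytes].
benign = rng.normal([10, 500], [3, 100], size=(200, 2))
attack = rng.normal([80, 300], [10, 100], size=(200, 2))
X = np.vstack([benign, attack])
y = np.array([0] * 200 + [1] * 200)   # 0 = benign, 1 = attack

clf = LogisticRegression(max_iter=1000).fit(X, y)

def counterfactual(x, model, step=1.0, max_iter=10000):
    """Greedily nudge the most influential feature until the
    prediction flips to benign (valid for a linear model)."""
    x = x.copy()
    w = model.coef_[0]
    j = int(np.argmax(np.abs(w)))          # most influential feature
    for _ in range(max_iter):
        if model.predict(x.reshape(1, -1))[0] == 0:
            break                          # benign prediction reached
        x[j] -= step * np.sign(w[j])       # move against the attack score
    return x

flagged = np.array([85.0, 320.0])          # a flow the model flags as attack
cf = counterfactual(flagged, clf)
print("flagged prediction:", clf.predict(flagged.reshape(1, -1))[0])
print("counterfactual:", cf, "->", clf.predict(cf.reshape(1, -1))[0])
```

The difference between `flagged` and `cf` (here, a reduced packet rate) is the actionable insight: it names the concrete condition under which the same flow would no longer be classified as an attack, which is the kind of output the paper's counterfactual-SHAP framework feeds into an intrusion response system.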
Pages: 92 - 97
Page count: 6
Related Papers
50 records in total
  • [41] An Explainable LSTM-Based Intrusion Detection System Optimized by Firefly Algorithm for IoT Networks
    Ogunseyi, Taiwo Blessing
    Thiyagarajan, Gogulakrishnan
    SENSORS, 2025, 25 (07)
  • [42] Explainable Machine Learning for Intrusion Detection
    Bellegdi, Sameh
    Selamat, Ali
    Olatunji, Sunday O.
    Fujita, Hamido
    Krejcar, Ondrej
    ADVANCES AND TRENDS IN ARTIFICIAL INTELLIGENCE: THEORY AND APPLICATIONS, IEA-AIE 2024, 2024, 14748 : 122 - 134
  • [43] An Explainable Deep Learning Framework for Resilient Intrusion Detection in IoT-Enabled Transportation Networks
    Oseni, Ayodeji
    Moustafa, Nour
    Creech, Gideon
    Sohrabi, Nasrin
    Strelzoff, Andrew
    Tari, Zahir
    Linkov, Igor
    IEEE TRANSACTIONS ON INTELLIGENT TRANSPORTATION SYSTEMS, 2023, 24 (01) : 1000 - 1014
  • [44] Explainable AI for Intrusion Detection Systems: LIME and SHAP Applicability on Multi-Layer Perceptron
    Gaspar, Diogo
    Silva, Paulo
    Silva, Catarina
    IEEE ACCESS, 2024, 12 : 30164 - 30175
  • [45] Leveraging Explainable Artificial Intelligence in Real-Time Cyberattack Identification: Intrusion Detection System Approach
    Larriva-Novo, Xavier
    Sanchez-Zas, Carmen
    Villagra, Victor A.
    Marin-Lopez, Andres
    Berrocal, Julio
    APPLIED SCIENCES-BASEL, 2023, 13 (15):
  • [46] Towards Directive Explanations: Crafting Explainable AI Systems for Actionable Human-AI Interactions
    Bhattacharya, Aditya
    EXTENDED ABSTRACTS OF THE 2024 CHI CONFERENCE ON HUMAN FACTORS IN COMPUTING SYSTEMS, CHI 2024, 2024,
  • [47] Post-Hoc Categorization Based on Explainable AI and Reinforcement Learning for Improved Intrusion Detection
    Larriva-Novo, Xavier
    Miguel, Luis Perez
    Villagra, Victor A.
    Alvarez-Campana, Manuel
    Sanchez-Zas, Carmen
    Jover, Oscar
    APPLIED SCIENCES-BASEL, 2024, 14 (24):
  • [48] An Explainable AI-Based Intrusion Detection System for DNS Over HTTPS (DoH) Attacks
    Zebin, Tahmina
    Rezvy, Shahadate
    Luo, Yuan
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2022, 17 : 2339 - 2349
  • [49] Integrating Explainable AI with Federated Learning for Next-Generation IoT: A comprehensive review and prospective insights
    Dubey, Praveer
    Kumar, Mohit
    COMPUTER SCIENCE REVIEW, 2025, 56
  • [50] Selective Explanations: Leveraging Human Input to Align Explainable AI
    Lai, Vivian
    Zhang, Yiming
    Chen, Chacha
    Liao, Q. Vera
    Tan, Chenhao
    PROCEEDINGS OF THE ACM ON HUMAN-COMPUTER INTERACTION, 2023, 7 (CSCW2)