Explainable Artificial Intelligence for Resilient Security Applications in the Internet of Things

Citations: 0
Authors
Masud, Mohammed Tanvir [1 ]
Keshk, Marwa [2 ]
Moustafa, Nour [1 ]
Linkov, Igor [3 ]
Emge, Darren K. [4 ]
Affiliations
[1] University of New South Wales, School of Systems and Computing, Canberra,NSW,2610, Australia
[2] University of New South Wales, School of Professional Studies, Canberra,NSW,2610, Australia
[3] U.S. Army Engineer Research and Development Center, Environmental Laboratory, Vicksburg,MS,39180, United States
[4] Russell Offices, U.S. Army Futures Command Indo-Pacific, Canberra,ACT,2600, Australia
Source
IEEE Open Journal of the Communications Society | 2025 / Vol. 6
Keywords
Computer crime; Cybersecurity; Deep learning; Internet of Things; Learning algorithms; Taxonomies
DOI
Not available
Abstract
The performance of Artificial Intelligence (AI) systems matches or even exceeds that of humans in a growing number of complex tasks. However, highly effective non-linear AI models are typically deployed as black boxes, their reasoning hidden within complex structures, so they provide no information about what precisely drives their predictions. This lack of transparency and interpretability reduces human users' trust in the models used for cyber defence, especially as cyber-resilience scenarios become increasingly diverse and challenging. Explainable AI (XAI) should therefore be incorporated into the development of cybersecurity models to deliver highly accurate models that human users can understand, trust, and manage. This paper explores the following XAI concepts. It summarises the current literature on XAI and discusses recent taxonomies that help explain different machine learning algorithms, including deep learning techniques developed and studied extensively in other IoT taxonomies. The outputs of AI models are crucial for cybersecurity, as experts require more than simple binary outputs to enable the cyber resilience of IoT systems. The paper also examines available XAI applications and safety-related threat models for explaining resilience in IoT systems, and summarises the difficulties and gaps in applying XAI to cybersecurity. Finally, various technical issues and trends are explained, and future research directions on technology, applications, security, and privacy are presented, emphasising explainable AI models. © 2020 IEEE.
Pages: 2877-2906