Explainable Artificial Intelligence for Resilient Security Applications in the Internet of Things

Cited: 0
Authors
Masud, Mohammed Tanvir [1 ]
Keshk, Marwa [2 ]
Moustafa, Nour [1 ]
Linkov, Igor [3 ]
Emge, Darren K. [4 ]
Affiliations
[1] Univ New South Wales, Sch Syst & Comp, Canberra, NSW 2610, Australia
[2] Univ New South Wales, Sch Profess Studies, Canberra, NSW 2610, Australia
[3] US Army Engineer Res & Dev Ctr, Environm Lab, Vicksburg, MS USA
[4] US Army Futures Command Indopacific, Russell Off, Canberra, ACT 2600, Australia
Source
IEEE OPEN JOURNAL OF THE COMMUNICATIONS SOCIETY | 2025, Vol. 6
Keywords
Artificial intelligence; Internet of Things; Explainable AI; Security; Computer security; Resilience; Computer crime; Explainable artificial intelligence; cyber resilience; cyber defence; threat model; intrusion detection; cyber threat intelligence; INTRUSION DETECTION; CYBER ATTACKS; CYBERSECURITY; ENSEMBLE; THREATS; VISUALIZATION; PROTECTION; FRAMEWORK; NETWORK; MAPS
DOI
Not available
Chinese Library Classification (CLC)
TM [Electrical Engineering]; TN [Electronic Technology and Communication Technology]
Discipline Codes
0808; 0809
Abstract
The performance of Artificial Intelligence (AI) systems now matches or exceeds that of humans on a growing number of complex tasks. However, highly effective non-linear AI models are generally deployed as black boxes whose complex internal structures reveal nothing about how they reach their predictions. This lack of transparency and interpretability reduces human users' trust in the models used for cyber defence, particularly as cyber resilience requirements become increasingly diverse and challenging. Explainable AI (XAI) should therefore be incorporated into the development of cybersecurity models to deliver accurate models that human users can understand, trust, and manage. This paper explores several concepts related to XAI. It summarises the current literature on XAI and reviews recent taxonomies for explaining different machine learning algorithms, including deep learning techniques developed and studied extensively in other IoT taxonomies. The outputs of AI models are crucial for cybersecurity, as experts require more than simple binary outputs to enable the cyber resilience of IoT systems. The paper also examines available XAI applications and safety-related threat models for explaining resilience in IoT systems, and summarises the difficulties and gaps in XAI with respect to cybersecurity. Finally, it discusses technical issues and trends, and presents directions for future research on technology, applications, security, and privacy, emphasising explainable AI models.
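The abstract's point that security analysts need more than a binary "attack / benign" flag can be illustrated with post-hoc feature attribution. The sketch below is not from the paper: the detector, feature names, weights, and baseline values are all hypothetical, chosen only to show how an occlusion-style attribution attaches per-feature evidence to an intrusion alert.

```python
# Minimal sketch of post-hoc feature attribution for an IoT intrusion alert.
# The model, feature names, and weights are hypothetical, chosen only to
# illustrate explanations that go beyond a binary attack/benign output.

FEATURES = ["pkt_rate", "dst_port_entropy", "payload_len", "ttl_var"]
WEIGHTS = [0.9, 1.4, 0.2, -0.3]  # toy linear detector weights
BIAS = -1.0

def score(x):
    """Toy detector: higher score means more attack-like traffic."""
    return sum(w * v for w, v in zip(WEIGHTS, x)) + BIAS

def attributions(x, baseline):
    """Occlusion-style attribution: a feature's contribution is the score
    drop when that feature is replaced by its benign baseline value."""
    full = score(x)
    out = {}
    for i, name in enumerate(FEATURES):
        occluded = list(x)
        occluded[i] = baseline[i]
        out[name] = full - score(occluded)
    return out

if __name__ == "__main__":
    benign_baseline = [0.1, 0.2, 0.5, 0.4]  # typical benign flow profile
    alert = [0.9, 0.95, 0.6, 0.3]           # flow that triggered an alert
    # Rank features by absolute contribution to the alert score.
    for name, contrib in sorted(attributions(alert, benign_baseline).items(),
                                key=lambda kv: -abs(kv[1])):
        print(f"{name:18s} {contrib:+.3f}")
```

For the hypothetical alert above, the attribution ranks `dst_port_entropy` and `pkt_rate` as the dominant evidence, which is the kind of per-feature justification an analyst can inspect; the same occlusion idea underlies more principled methods such as SHAP that the XAI literature surveys.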
Pages: 2877-2906
Page count: 30