Explainable Artificial Intelligence (XAI) for Internet of Things: A Survey

Cited by: 48
Authors
Kok, Ibrahim [1 ]
Okay, Feyza Yildirim [2 ]
Muyanli, Ozgecan [3 ]
Ozdemir, Suat [3 ]
Affiliations
[1] Pamukkale Univ, Dept Comp Engn, TR-20160 Denizli, Turkiye
[2] Gazi Univ, Dept Comp Engn, TR-06560 Ankara, Turkiye
[3] Hacettepe Univ, Dept Comp Engn, TR-06800 Ankara, Turkiye
Keywords
Explainability; explainable artificial intelligence (XAI); Internet of Things (IoT); interpretability; interpretable machine learning (IML); INDUSTRY 4.0; FRAMEWORK; AI;
DOI
10.1109/JIOT.2023.3287678
Chinese Library Classification
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Artificial intelligence (AI) and machine learning (ML) are widely employed to make solutions more accurate and autonomous in many smart and intelligent Internet of Things (IoT) applications. In these applications, the performance and accuracy of AI/ML models are the main concerns, while the transparency, interpretability, and accountability of the models' decisions are often neglected. Moreover, AI/ML-supported next-generation IoT applications call for more reliable, transparent, and explainable systems. In particular, regardless of whether a decision is simple or complex, how it is made, which features affect it, and how people or experts can adopt and interpret it are crucial issues. People also tend to view unpredictable or opaque AI outcomes with skepticism, which hinders the adoption and proliferation of IoT applications. To that end, explainable AI (XAI) has emerged as a promising research topic that makes the ante-hoc and post-hoc stages of black-box models transparent, understandable, and interpretable. In this article, we provide an in-depth and systematic review of recent studies that apply XAI models in the IoT domain. We classify the studies by methodology and application area. Additionally, we highlight the challenges and open issues and provide promising future directions to guide researchers in future investigations.
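The abstract's central question, "which features affect the decision" of a black-box model, is what post-hoc, model-agnostic XAI methods answer. As a minimal sketch (not the survey's own method), the snippet below trains an opaque classifier on synthetic, hypothetical "IoT sensor" data and explains it with permutation importance, which measures how much accuracy drops when each feature is shuffled; all feature names and the dataset are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical IoT sensor readings: temperature, humidity, vibration.
X = rng.normal(size=(500, 3))
# The label depends almost entirely on the vibration feature (index 2).
y = (X[:, 2] + 0.1 * rng.normal(size=500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Post-hoc explanation: the model is treated as a black box, and each
# feature's importance is the mean accuracy drop when it is permuted.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for name, imp in zip(["temperature", "humidity", "vibration"],
                     result.importances_mean):
    print(f"{name}: {imp:.3f}")
```

Running this should attribute most of the importance to the vibration feature, matching how the data were generated; other post-hoc tools the survey covers (e.g., LIME and SHAP) provide analogous, often per-instance, attributions.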
Pages: 14764-14779
Page count: 16