On Using Explainable Artificial Intelligence for Failure Identification in Microwave Networks

Cited by: 9
Authors
Ayoub, Omran [1 ]
Musumeci, Francesco [1 ]
Ezzeddine, Fatima [3 ]
Passera, Claudio [2 ]
Tornatore, Massimo [1 ]
Affiliations
[1] Politecn Milan, Milan, Italy
[2] SIAE Microelettron, Milan, Italy
[3] Lebanese Univ, Beirut, Lebanon
Source
25th Conference on Innovation in Clouds, Internet and Networks (ICIN 2022) | 2022
DOI
10.1109/ICIN53892.2022.9758095
Chinese Library Classification
TP3 [computing technology, computer technology];
Discipline code
0812;
Abstract
Artificial Intelligence (AI) has demonstrated superhuman capabilities in solving a significant number of tasks, leading to widespread industrial adoption. For in-field network-management applications, however, AI-based solutions have often raised skepticism among practitioners, as their internal reasoning is not exposed and their decisions cannot be easily explained, preventing humans from trusting and even understanding them. To address this shortcoming, a new area of AI, called Explainable AI (XAI), is attracting the attention of both academic and industrial researchers. XAI is concerned with explaining and interpreting the internal reasoning and the outcomes of AI-based models to achieve more trustworthy and practical deployments. In this work, we investigate the application of XAI to automated failure-cause identification in microwave networks. We first show how existing supervised ML algorithms can be used to solve the problem of failure-cause identification, achieving an accuracy of around 94%. Then, we explore the application of well-known XAI frameworks, such as SHapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), to address important practical questions arising during the actual deployment of automated failure-cause identification in microwave networks. Answering these questions allows for a deeper understanding of the behavior of the adopted ML algorithm. Specifically, we exploit XAI to understand the main factors behind the ML algorithm's decisions and to explain why the model makes identification errors on specific instances.
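The workflow described in the abstract (train a supervised classifier on labeled failure data, then query SHAP for global feature attributions and LIME for instance-level explanations) can be illustrated with a minimal sketch. The example below is not from the paper: it uses synthetic data, a RandomForestClassifier, and placeholder feature and class names standing in for the hardware-alarm features and failure causes used by the authors.

```python
# Minimal sketch (not the paper's implementation): generic supervised classifier
# plus SHAP and LIME explanations on synthetic stand-in data.
import numpy as np
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a labeled failure dataset with 6 hypothetical failure causes.
X, y = make_classification(n_samples=2000, n_features=10, n_informative=5,
                           n_classes=6, random_state=0)
feature_names = [f"feat_{i}" for i in range(X.shape[1])]        # placeholder names
class_names = [f"failure_cause_{c}" for c in range(6)]           # placeholder names
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Step 1: supervised failure-cause identification.
model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))

# Step 2a: SHAP for a global view of which features drive the model's decisions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
# The returned shape differs across shap versions (a list per class or one 3-D array),
# so collapse every axis except the feature axis to get a rough global ranking.
abs_vals = np.abs(np.asarray(shap_values))
feat_axis = abs_vals.shape.index(X_test.shape[1])
global_importance = abs_vals.mean(axis=tuple(a for a in range(abs_vals.ndim)
                                             if a != feat_axis))
ranking = sorted(zip(feature_names, global_importance), key=lambda t: -t[1])
print("global feature ranking:", ranking[:5])

# Step 2b: LIME for a local explanation of a specific (ideally misclassified) instance.
lime_explainer = LimeTabularExplainer(X_train, feature_names=feature_names,
                                      class_names=class_names, mode="classification")
errors = np.where(model.predict(X_test) != y_test)[0]
idx = int(errors[0]) if len(errors) else 0
explanation = lime_explainer.explain_instance(X_test[idx], model.predict_proba,
                                              num_features=5)
print("local explanation for instance", idx, ":", explanation.as_list())
```

In this sketch, SHAP supplies a coarse global ranking of the features that drive the classifier's predictions, while LIME is pointed at a misclassified test instance, mirroring the paper's use of explanations to investigate identification errors on specific instances.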
Pages: 48-55
Page count: 8
References
12 items in total
[1] Coenning F. Understanding ITU T.
[2] Dosilovic FK. 2018 41st International Convention on Information and Communication Technology, Electronics and Microelectronics (MIPRO), 2018: 210. DOI: 10.23919/MIPRO.2018.8400040.
[3] Du, Mengnan; Liu, Ninghao; Hu, Xia. Techniques for Interpretable Machine Learning. Communications of the ACM, 2020, 63(1): 68-77.
[4] Gilpin, Leilani H.; Bau, David; Yuan, Ben Z.; Bajwa, Ayesha; Specter, Michael; Kagal, Lalana. Explaining Explanations: An Overview of Interpretability of Machine Learning. 2018 IEEE 5th International Conference on Data Science and Advanced Analytics (DSAA), 2018: 80-89.
[5] Kim B. arXiv preprint, 2017.
[6] Linardatos, Pantelis; Papastefanopoulos, Vasilis; Kotsiantis, Sotiris. Explainable AI: A Review of Machine Learning Interpretability Methods. Entropy, 2021, 23(1): 1-45.
[7] Lipton, Zachary C. The Mythos of Model Interpretability. Communications of the ACM, 2018, 61(10): 36-43.
[8] Lundberg SM. Advances in Neural Information Processing Systems, 2017, 30.
[9] Miller, Tim. Explanation in artificial intelligence: Insights from the social sciences. Artificial Intelligence, 2019, 267: 1-38.
[10] Musumeci, Francesco; Magni, Luca; Ayoub, Omran; Rubino, Roberto; Capacchione, Massimiliano; Rigamonti, Gabriele; Milano, Michele; Passera, Claudio; Tornatore, Massimo. Supervised and Semi-Supervised Learning for Failure Identification in Microwave Networks. IEEE Transactions on Network and Service Management, 2021, 18(2): 1934-1945.