Towards explainable artificial intelligence in optical networks: the use case of lightpath QoT estimation

Cited by: 30
Authors
Ayoub, Omran [1 ]
Troia, Sebastian [2 ]
Andreoletti, Davide [1 ]
Bianco, Andrea [3 ]
Tornatore, Massimo [2 ]
Giordano, Silvia [1 ]
Rottondi, Cristina
Affiliations
[1] Scuola Univ Professionale Svizzera Italiana, Lugano, Switzerland
[2] Politecn Milan, Dipartimento Elettron Informaz & Bioingn, Milan, Italy
[3] Politecn Torino, Dipartimento Elettron & Telecomunicazioni, Turin, Italy
Keywords
Predictive models; Artificial intelligence; Data models; Cognition; Task analysis; Feature extraction; Analytical models;
DOI
10.1364/JOCN.470812
Chinese Library Classification
TP3 (computing technology, computer technology)
Discipline code
0812
Abstract
Artificial intelligence (AI) and machine learning (ML) continue to demonstrate substantial capabilities in solving a wide range of optical-network-related tasks such as fault management, resource allocation, and lightpath quality of transmission (QoT) estimation. However, the research community has focused mainly on ML models' predictive capabilities, neglecting model understanding, i.e., interpreting how a model reasons and arrives at its predictions. This lack of transparency hinders the understanding of a model's behavior and prevents operators from judging, and hence trusting, the model's decisions. To mitigate the lack of transparency and trust in ML, explainable AI (XAI) frameworks can be leveraged to explain how a model correlates input features to its outputs. In this paper, we focus on the application of XAI to lightpath QoT estimation. In particular, we exploit Shapley additive explanations (SHAP) as the XAI framework. Before presenting our analysis, we provide a brief overview of XAI and SHAP, discuss the benefits of applying XAI in networking, and survey studies that apply XAI to networking tasks. Then, we model lightpath QoT estimation as a supervised binary classification task, predicting whether the bit error rate (BER) associated with a lightpath is below or above a reference acceptability threshold, and train an extreme gradient boosting (XGBoost) model as the classifier. Finally, we demonstrate how to apply SHAP to extract insights about the model and to inspect misclassifications. (C) 2022 Optica Publishing Group.
Pages: A26-A38 (13 pages)