Federated Learning of Explainable AI Models in 6G Systems: Towards Secure and Automated Vehicle Networking

Cited by: 44
Authors
Renda, Alessandro [1]
Ducange, Pietro [1]
Marcelloni, Francesco [1]
Sabella, Dario [2]
Filippou, Miltiadis C. [3]
Nardini, Giovanni [1]
Stea, Giovanni [1]
Virdis, Antonio [1]
Micheli, Davide [4]
Rapone, Damiano [4]
Baltar, Leonardo Gomes [3]
Affiliations
[1] Univ Pisa, Dept Informat Engn, I-56122 Pisa, Italy
[2] Intel Corp Italia SpA, I-20094 Milan, Italy
[3] Intel Deutschland GmbH, D-85579 Neubiberg, Germany
[4] Telecom Italia Spa, I-00198 Rome, Italy
Keywords
explainable artificial intelligence; federated learning; 6G; vehicle-to-everything (V2X); quality of service; quality of experience
DOI
10.3390/info13080395
CLC classification number
TP [Automation and computer technology]
Subject classification code
0812
Abstract
This article presents the concept of federated learning (FL) of eXplainable Artificial Intelligence (XAI) models as an enabling technology in advanced 5G systems evolving towards 6G, and discusses its applicability to the automated vehicle networking use case. Although FL of neural networks has been widely investigated using variants of stochastic gradient descent as the optimization method, it has not yet been adequately studied for inherently explainable models. On the one hand, XAI improves the user experience of the offered communication services by helping end users trust, by design, that in-network AI functionality issues appropriate action recommendations. On the other hand, FL ensures the security and privacy of both vehicular and user data across the whole system. These desiderata are often ignored in existing AI-based solutions for wireless network planning, design and operation. From this perspective, the article provides a detailed description of relevant 6G use cases, with a focus on vehicle-to-everything (V2X) environments, and describes a framework to evaluate the proposed approach involving online training based on real data from live networks. FL of XAI models is expected to bring benefits as a methodology for achieving seamless availability of decentralized, lightweight and communication-efficient intelligence. The impacts of the proposed approach (including standardization perspectives) consist in improved trustworthiness of operations, e.g., via explainability of quality of experience (QoE) predictions, along with secure and privacy-preserving management of data from sensors, terminals, users and applications.
Pages: 14
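
As a rough illustration of the FL baseline that the abstract contrasts with (SGD-based training of a shared model, aggregated by federated averaging), the following minimal Python sketch runs one FedAvg round over a few simulated vehicle clients. The linear QoE-prediction model, the function names and the toy data are illustrative assumptions only; they are not the paper's actual scheme for federating inherently explainable models.

```python
# Minimal FedAvg sketch: clients train locally on private data, the server
# averages their model parameters weighted by local dataset size.
import numpy as np

def local_sgd_step(weights, X, y, lr=0.1):
    """One local SGD step on an illustrative linear QoE-prediction model."""
    preds = X @ weights
    grad = X.T @ (preds - y) / len(y)
    return weights - lr * grad

def fedavg_round(global_weights, client_data, lr=0.1):
    """One FL round: local updates, then a size-weighted server-side average."""
    updates, sizes = [], []
    for X, y in client_data:
        w = local_sgd_step(global_weights.copy(), X, y, lr)
        updates.append(w)
        sizes.append(len(y))
    sizes = np.array(sizes, dtype=float)
    return np.average(updates, axis=0, weights=sizes / sizes.sum())

# Toy usage: three "vehicles" hold private measurements and never share raw data.
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 3)), rng.normal(size=20)) for _ in range(3)]
w = np.zeros(3)
for _ in range(50):
    w = fedavg_round(w, clients)
print("global model weights:", w)
```

Only parameter vectors leave the clients, which is the privacy property the abstract attributes to FL; how the same idea transfers to explainable (e.g., rule-based) models is the subject of the article itself.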