Toward Securing Federated Learning Against Poisoning Attacks in Zero Touch B5G Networks

Cited by: 18
Authors
Ben Saad, Sabra [1 ]
Brik, Bouziane [2 ]
Ksentini, Adlen [1 ]
Affiliations
[1] EURECOM, Commun Syst Dept, F-06904 Sophia Antipolis, France
[2] Univ Bourgogne, DRIVE EA 1859, F-58000 Nevers, France
Source
IEEE TRANSACTIONS ON NETWORK AND SERVICE MANAGEMENT | 2023, Vol. 20, No. 2
Keywords
Zero touch management (ZSM); 5G and beyond; network slicing; federated learning; poisoning attack; reinforcement and unsupervised learning; dimensional reduction;
DOI
10.1109/TNSM.2023.3278838
CLC Classification Number
TP [Automation Technology, Computer Technology]
Subject Classification Code
0812
Abstract
The Zero-touch network and Service Management (ZSM) concept in 5G and Beyond (B5G) networks aims to automate the management and orchestration of running network slices. This requires heavy use of advanced deep learning techniques in a closed loop to build suitable decisions automatically and meet network slices' requirements. In this context, Federated Learning (FL) plays a vital role in training deep learning models collaboratively among thousands of network slice participants while preserving their privacy and hence network slice isolation. Specifically, running network slices share only their model parameters with a central entity, e.g., the Inter-Domain Slice Manager, which aggregates them to build a global model; the central entity therefore never directly accesses the training data. However, FL is vulnerable to poisoning attacks, where an insider participant may upload poisoned updates to the central entity, corrupting the construction of the global model and degrading its performance. It is therefore crucial to design security mechanisms to detect and mitigate such threats. In this paper, we design a novel framework to automatically detect malicious participants in the FL process. Our framework first uses a deep reinforcement learning algorithm to dynamically select a network slice as a trusted participant, based mainly on its reputation. The selected participant is then in charge of identifying poisoned model updates by leveraging unsupervised machine learning. We demonstrate the feasibility of our framework on a real dataset generated using the 5G OpenAirInterface (OAI) platform. Evaluation results show that our framework efficiently handles poisoning attacks, even in the presence of several malicious participants.
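To make the detection idea in the abstract concrete, the Python sketch below shows one plausible way a screening participant could filter a round of FL updates with dimensionality reduction (here PCA) and a robust outlier score before aggregation. This is a minimal sketch under stated assumptions, not the paper's actual algorithm: the reputation-based deep reinforcement learning selection step is omitted, and every function name and threshold (filter_poisoned_updates, federated_average, z_thresh) is hypothetical.

# Minimal illustrative sketch (assumption: NOT the authors' implementation).
# A screening participant inspects the model updates received in one FL round
# using PCA-based dimensionality reduction plus a robust outlier score; the
# surviving updates are then aggregated with plain FedAvg.
import numpy as np
from sklearn.decomposition import PCA

def filter_poisoned_updates(updates, n_components=2, z_thresh=3.5):
    """Flag updates whose PCA projection lies far from the population median."""
    X = np.stack([u.ravel() for u in updates])            # one row per participant
    Z = PCA(n_components=n_components).fit_transform(X)   # reduce update dimensionality
    dists = np.linalg.norm(Z - np.median(Z, axis=0), axis=1)
    med = np.median(dists)
    mad = np.median(np.abs(dists - med)) + 1e-12          # robust spread estimate
    robust_z = 0.6745 * (dists - med) / mad                # modified z-score
    keep = robust_z < z_thresh                             # drop statistical outliers
    return [u for u, k in zip(updates, keep) if k], keep

def federated_average(updates):
    """Plain FedAvg over the updates that survived filtering."""
    return np.mean(np.stack(updates), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    honest = [rng.normal(0.0, 0.1, size=10) for _ in range(8)]
    poisoned = [rng.normal(5.0, 0.1, size=10) for _ in range(2)]  # exaggerated malicious updates
    kept, mask = filter_poisoned_updates(honest + poisoned)
    print("kept mask:", mask)                              # poisoned entries should be False
    print("global update:", federated_average(kept))

In the paper's setting, this screening would run on the network slice selected as a trusted participant by the reputation-based agent, rather than at the aggregator itself.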
Pages: 1612-1624
Page count: 13