A Privacy-Preserving Federated Learning With a Feature of Detecting Forged and Duplicated Gradient Model in Autonomous Vehicle

Cited by: 1
Authors
Alamer, Abdulrahman [1 ]
Basudan, Sultan [1 ]
Affiliations
[1] Jazan Univ, Comp Sci Dept, Jazan 45142, Saudi Arabia
Keywords
Computational modeling; Privacy; Data privacy; Analytical models; Servers; Feature extraction; Accuracy; Homomorphic encryption; Federated learning; Protection; Privacy-preserving; Forged model; Autonomous vehicle
DOI
10.1109/ACCESS.2025.3545786
Chinese Library Classification (CLC): TP [Automation Technology, Computer Technology]
Discipline code: 0812
Abstract
Recently, autonomous vehicular (AV) applications have garnered significant attention regarding the privacy of collected sensing data. Consequently, federated learning (FL) has emerged as a prominent solution to the privacy concerns associated with sharing original data. This work introduces FL into the AV domain, enabling vehicles to train models locally and send local updates to a server instead of transmitting raw datasets, thereby shielding data from potential exposure by a malicious centralized cloud server (CC-server). However, a malicious CC-server may still infer the content of an AV's dataset by analyzing the uploaded gradient models. To counter this, various privacy-preserving federated learning (PPFL) approaches have been proposed to prevent a malicious CC-server from executing revealing or forgery attacks on the uploaded gradient models. Nonetheless, the PPFL framework leaves room for a malicious AV to mount poisoning attacks on the global model by uploading forged or duplicated trained models to the CC-server. This paper proposes an Efficient Privacy-Preserving Federated Learning framework with the capability of detecting forged and duplicated models (PPFL-DP), which identifies forged uploaded gradient models. Specifically, we design a threshold ($\varepsilon$) algorithm based on Euclidean distance to serve as an effective method for detecting poisoning attacks. Furthermore, to preserve the privacy of the $\varepsilon$ method itself, we leverage group-oblivious signcryption (GOSC) combined with homomorphic cryptography to develop a sign-homomorphic encryption (SHE) protocol. This protocol effectively preserves privacy in FL while remaining compatible with the threshold $\varepsilon$ design. The proposed SHE protocol thus achieves privacy preservation and the detection of poisoning attacks in FL simultaneously.
Through experimental results and theoretical analysis, the proposed SHE protocol demonstrates high efficiency in protecting client privacy while maintaining a high accuracy in detecting forged uploaded models, without adversely affecting the quality of the global model.
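The core detection idea described in the abstract can be sketched in plain Python. This is a minimal, hypothetical illustration of Euclidean-distance thresholding over plaintext gradient vectors, not the paper's SHE protocol (which performs the comparison over signcrypted, homomorphically encrypted updates): an update is flagged as forged when its distance from the mean of all updates exceeds a threshold $\varepsilon$, and as duplicated when it is identical to an earlier client's update. The function and variable names are illustrative assumptions, not taken from the paper.

```python
import math

def euclidean_distance(a, b):
    """Euclidean distance between two equal-length gradient vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def flag_suspicious_updates(updates, epsilon):
    """Flag client gradient updates as forged or duplicated.

    updates: list of gradient vectors (one per client).
    epsilon: distance threshold; updates farther than this from the
             mean update are treated as potentially poisoned.
    Returns (forged_indices, duplicated_indices).
    """
    n = len(updates)
    dim = len(updates[0])
    # Component-wise mean of all uploaded updates.
    mean = [sum(u[i] for u in updates) / n for i in range(dim)]
    # Forged: too far from the mean update.
    forged = [i for i, u in enumerate(updates)
              if euclidean_distance(u, mean) > epsilon]
    # Duplicated: identical to an earlier client's update (distance 0).
    duplicated = [j for i in range(n) for j in range(i + 1, n)
                  if euclidean_distance(updates[i], updates[j]) == 0.0]
    return forged, duplicated

# Example: two honest clients near [1, 1], one outlier, one duplicate.
updates = [[1.0, 1.0], [1.1, 0.9], [10.0, 10.0], [1.1, 0.9]]
forged, duplicated = flag_suspicious_updates(updates, epsilon=5.0)
print(forged, duplicated)  # → [2] [3]
```

In the actual PPFL-DP design, this comparison would have to be carried out without the server ever seeing the plaintext updates, which is the role the abstract assigns to the GOSC-based SHE protocol.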
Pages: 38484-38501 (18 pages)