Copyright protection framework for federated learning models against collusion attacks

Cited by: 2
Authors
Luo, Yuling [1 ,2 ]
Li, Yuanze [1 ,2 ]
Qin, Sheng [1 ,2 ]
Fu, Qiang [1 ,2 ]
Liu, Junxiu [1 ,2 ]
Affiliations
[1] Guangxi Normal Univ, Sch Elect & Informat Engn, Guangxi Key Lab Brain Inspired Comp & Intelligent, Guilin 541004, Peoples R China
[2] Guangxi Normal Univ, Key Lab Nonlinear Circuits & Opt Commun, Educ Dept Guangxi Zhuang Autonomous Reg, Guilin 541004, Peoples R China
Keywords
Federated learning; Anti-collusion coding; Intellectual property protection; Watermark;
DOI
10.1016/j.ins.2024.121161
Chinese Library Classification (CLC): TP [automation technology; computer technology]
Discipline classification code: 0812
Abstract
Federated learning (FL) models are built jointly by multiple participants who contribute their training datasets and collaborate in training. However, the training and deployment processes face various intellectual property challenges, such as illegal model theft and data leakage. Existing FL protection frameworks verify model ownership for each client independently; under collusion attacks, in which multiple clients conspire to steal the model together, they cannot accurately identify which clients stole it. To address this challenge, a novel watermarking protection scheme against collusion attacks in federated learning is proposed in this work. It employs anti-collusion coding to design unique watermark information for each client, which enables colluders to be detected effectively. Furthermore, it uses a dedicated regularized loss function, together with skip connections, to embed the watermark information into each batch normalization layer. The experimental results demonstrate that embedding different watermark information for each client does not affect the accuracy of the original task, that colluders are identified with approximately 100% accuracy, and that the watermark embedding and extraction times are only 1.53% and 0.29% of the original task's time, respectively. The scheme also exhibits high robustness against various common attacks, including fine-tuning, shearing, and collusion attacks.
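The abstract only sketches the embedding mechanism, so the snippet below is a minimal, assumption-laden illustration rather than the authors' implementation: a PyTorch-style regularizer (the names WatermarkRegularizer, bn_scale_vector, proj, bits, and lam are hypothetical) that nudges a random projection of the batch-normalization scale parameters towards a client-specific bit string, in the spirit of white-box regularizer-based watermarking. In the paper's scheme the per-client bit strings would come from an anti-collusion code; here they are simply taken as given, and the skip-connection detail is omitted.

import torch
import torch.nn as nn
import torch.nn.functional as F

def bn_scale_vector(model: nn.Module) -> torch.Tensor:
    # Concatenate the scale (gamma) parameters of all batch normalization layers.
    gammas = [m.weight.view(-1) for m in model.modules()
              if isinstance(m, (nn.BatchNorm1d, nn.BatchNorm2d))]
    return torch.cat(gammas)

class WatermarkRegularizer(nn.Module):
    # Regularized loss term that pushes a random projection of the BN scale
    # parameters towards a client-specific watermark bit string.
    def __init__(self, model: nn.Module, bits: torch.Tensor, lam: float = 0.01):
        super().__init__()
        n = bn_scale_vector(model).numel()
        # Fixed secret projection matrix acting as the embedding key; illustrative only.
        self.register_buffer("proj", torch.randn(bits.numel(), n))
        self.register_buffer("bits", bits.float())
        self.lam = lam  # weight of the watermark term relative to the task loss

    def forward(self, model: nn.Module) -> torch.Tensor:
        logits = self.proj @ bn_scale_vector(model)
        return self.lam * F.binary_cross_entropy_with_logits(logits, self.bits)

# Hypothetical usage inside a client's local training step:
#   total_loss = task_loss + wm_reg(model)
# Extraction would threshold torch.sigmoid(proj @ bn_scale_vector(model)) at 0.5 and
# match the recovered bits against the anti-collusion codewords to trace colluders.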
Pages: 17