Exploring Federated Learning Dynamics for Black-and-White-Box DNN Traitor Tracing

Cited by: 0
Authors
Rodriguez-Lois, Elena [1 ]
Perez-Gonzalez, Fernando [1 ]
Affiliations
[1] Univ Vigo, Signal Theory & Commun Dept, atlanTT Res Ctr, EE Telecomunicac, Vigo 36310, Spain
Source
2024 2ND INTERNATIONAL CONFERENCE ON FEDERATED LEARNING TECHNOLOGIES AND APPLICATIONS, FLTA | 2024
Keywords
DNN watermarking; fingerprinting; federated learning; traitor tracing; Tardos codes; black-box; white-box
DOI
10.1109/FLTA63145.2024.10840113
CLC Classification Number
TP18 [Artificial Intelligence Theory]
Discipline Classification Codes
081104; 0812; 0835; 1405
Abstract
As deep learning applications become more prevalent, the need for extensive training data raises concerns about sensitive, personal, or proprietary information. Federated Learning (FL) addresses this by enabling collaborative model training across distributed data-owners, but it introduces challenges in safeguarding model ownership and identifying the source of a leak. Building on prior work, this paper explores the adaptation of black-and-white-box traitor-tracing watermarking to FL classifiers, addressing the threat of collusion attacks by different data-owners. The study reveals that leak-resistant white-box fingerprints can be implemented directly without significant impact from FL dynamics, whereas black-box fingerprints are drastically affected and lose their traitor-tracing capabilities. To mitigate this effect, we propose increasing the number of black-box salient neurons through dropout regularization. Although open problems remain, such as analyzing non-i.i.d. datasets and over-parameterized models, the results show that collusion-resistant traitor tracing, identifying all data-owners involved in a suspected leak, is feasible in an FL framework even in the early stages of training.
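The keywords name Tardos codes as the collusion-resistant fingerprinting primitive behind the traitor tracing summarized above. As background only, the sketch below (not taken from the paper) illustrates the standard symmetric Tardos construction: per-position biases drawn from an arcsine-like density, a binary codeword assigned to each data-owner, and correlation-based accusation scores computed against a leaked codeword. The code length, cutoff parameter, collusion strategy, and threshold-free ranking are illustrative assumptions, not the paper's settings.

import numpy as np

# Illustrative background sketch (not from the paper): the standard symmetric
# Tardos fingerprinting scheme named in the keywords. Code length m, cutoff t,
# and the collusion model below are arbitrary example choices.

def generate_tardos_code(n_users, m, t=0.01, seed=None):
    # Draw per-position biases p_i from the arcsine-like density
    # f(p) ~ 1/sqrt(p(1-p)) restricted to [t, 1-t], then codeword bits
    # X[j, i] ~ Bernoulli(p_i) for every user j.
    rng = np.random.default_rng(seed)
    lo, hi = np.arcsin(np.sqrt(t)), np.arcsin(np.sqrt(1.0 - t))
    p = np.sin(rng.uniform(lo, hi, size=m)) ** 2
    X = (rng.uniform(size=(n_users, m)) < p).astype(np.int8)
    return X, p

def accusation_scores(X, y, p):
    # Symmetric (Skoric-style) correlation scores of every user against the
    # leaked codeword y; honest users' scores concentrate around zero, while
    # colluders accumulate large positive scores.
    g1 = np.sqrt((1.0 - p) / p)    # contribution when the user's bit is 1
    g0 = -np.sqrt(p / (1.0 - p))   # contribution when the user's bit is 0
    U = np.where(X == 1, g1, g0)
    return np.where(y == 1, U, -U).sum(axis=1)

# Toy usage: 10 data-owners, three of whom collude by majority-voting their codes.
X, p = generate_tardos_code(n_users=10, m=4096, seed=0)
colluders = [2, 5, 7]
y = (X[colluders].mean(axis=0) >= 0.5).astype(np.int8)
scores = accusation_scores(X, y, p)
print("highest-scoring users:", np.argsort(scores)[::-1][:3])  # expected to recover the colluders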
Pages: 282-289
Page count: 8