FedIPR: Ownership Verification for Federated Deep Neural Network Models

Cited by: 29
Authors
Li, Bowen [1 ]
Fan, Lixin [2 ]
Gu, Hanlin [2 ]
Li, Jie [1 ]
Yang, Qiang [2 ,3 ]
Affiliations
[1] Shanghai Jiao Tong Univ, Dept Comp Sci & Engn, Shanghai 200240, Peoples R China
[2] WeBank AI Lab, WeBank, Shenzhen 518000, Peoples R China
[3] Hong Kong Univ Sci & Technol, Dept Comp Sci & Engn, Hong Kong, Peoples R China
Keywords
Watermarking; Collaborative work; Data models; Computational modeling; Training; Intellectual property; Training data; Model IPR protection; ownership verification; federated learning; model watermarking; backdoor training;
DOI
10.1109/TPAMI.2022.3195956
CLC classification
TP18 [Artificial Intelligence Theory];
Discipline codes
081104; 0812; 0835; 1405;
Abstract
Federated learning models are collaboratively developed upon valuable training data owned by multiple parties. During the development and deployment of federated models, they are exposed to risks including illegal copying, re-distribution, misuse and/or free-riding. To address these risks, ownership verification of federated learning models is a prerequisite for protecting federated learning model intellectual property rights (IPR), which we term FedIPR. We propose a novel federated deep neural network (FedDNN) ownership verification scheme that allows private watermarks to be embedded and verified to claim legitimate IPR of FedDNN models. In the proposed scheme, each client independently verifies the existence of the model watermarks and claims respective ownership of the federated model without disclosing either private training data or private watermark information. The effectiveness of embedded watermarks is theoretically justified by a rigorous analysis of the conditions under which watermarks can be privately embedded and detected by multiple clients. Moreover, extensive experimental results on computer vision and natural language processing tasks demonstrate that watermarks of varying bit-lengths can be embedded and reliably detected without compromising original model performance. Our watermarking scheme is also resilient to various federated training settings and robust against removal attacks.
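The white-box, feature-based watermarking the abstract alludes to can be sketched as follows. This is a minimal illustrative assumption of the general technique (embedding a binary signature into a layer's weights via a sign loss on a secret random projection, then detecting by sign agreement), not the paper's exact formulation; the names `embed`, `detect`, the projection matrix `A`, and the hinge margin of 0.5 are all hypothetical choices made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a flattened layer of model weights.
d, n_bits = 256, 32
w = rng.normal(size=d)

# Each client's private watermark key: a secret random projection
# matrix A and a target bit-string b in {-1, +1}^n_bits.
A = rng.normal(size=(n_bits, d))
b = rng.choice([-1.0, 1.0], size=n_bits)

def embed(w, A, b, lr=0.01, steps=500, margin=0.5):
    """Embed bits by gradient descent on a hinge-style sign loss,
    driving sign(A @ w) toward b with at least the given margin."""
    w = w.copy()
    for _ in range(steps):
        margins = b * (A @ w)
        active = margins < margin          # bits not yet embedded firmly
        # Gradient of sum_i max(0, margin - b_i * (A_i . w)) w.r.t. w
        grad = -(A * (b * active)[:, None]).sum(axis=0)
        w -= lr * grad
    return w

def detect(w, A, b):
    """Detection rate: fraction of bits where sign(A @ w) matches b."""
    return float(np.mean(np.sign(A @ w) == b))

w_marked = embed(w, A, b)
print(detect(w_marked, A, b))
```

Because each client keeps its own `(A, b)` private, any client can run `detect` on the shared model to verify its signature without revealing the key or any training data, which mirrors the verification property the abstract claims.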
Pages: 4521 - 4536
Page count: 16