ActiveGuard: An active intellectual property protection technique for deep neural networks by leveraging adversarial examples as users' fingerprints

Cited by: 1
Authors
Xue, Mingfu [1 ]
Sun, Shichang [1 ]
He, Can [1 ]
Gu, Dujuan [2 ]
Zhang, Yushu [1 ]
Wang, Jian [1 ]
Liu, Weiqiang [3 ]
Affiliations
[1] Nanjing Univ Aeronaut & Astronaut, Coll Comp Sci & Technol, Nanjing, Peoples R China
[2] NSFOCUS Informat Technol CO LTD, Beijing, Peoples R China
[3] Nanjing Univ Aeronaut & Astronaut, Coll Elect & Informat Engn, Nanjing, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
active copyright protection; adversarial examples; authorization control; deep neural networks; users' fingerprints management;
DOI
10.1049/cdt2.12056
Chinese Library Classification (CLC)
TP3 [Computing Technology, Computer Technology];
Discipline code
0812;
Abstract
The intellectual property (IP) protection of deep neural network (DNN) models has raised many concerns in recent years. To date, most existing works use DNN watermarking to protect the IP of DNN models. However, DNN watermarking can only passively verify the copyright of a model after it has been pirated; it cannot prevent piracy in the first place. In this paper, an active DNN IP protection technique against DNN piracy, called ActiveGuard, is proposed. ActiveGuard provides active authorisation control, user identity management, and ownership verification for DNN models. Specifically, for the first time, ActiveGuard exploits well-crafted, rare and specific adversarial examples with specific classes and confidences as users' fingerprints to distinguish authorised users from unauthorised ones. Authorised users can input their fingerprints to the DNN model for identity authentication and then obtain normal usage, while unauthorised users obtain only very poor model performance. In addition, ActiveGuard enables the model owner to embed a watermark into the weights of the DNN model for ownership verification. Compared to the few existing active DNN IP protection works, ActiveGuard supports both user identity identification and active authorisation control, and it introduces lower overhead than these existing works. Experimental results show that, for authorised users, the test accuracies of the LeNet-5 and Wide Residual Network (WRN) models are 99.15% and 91.46%, respectively, while for unauthorised users they are only 8.92% and 10%, respectively. Moreover, each authorised user can pass fingerprint authentication with a high success rate (up to 100%). For ownership verification, the embedded watermark can be successfully extracted without affecting the normal performance of the DNN models.
Furthermore, ActiveGuard is demonstrated to be robust against model fine-tuning attacks, pruning attacks, and three types of fingerprint forgery attacks.
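The authorisation-control idea described in the abstract can be illustrated with a minimal sketch (not the authors' code; `toy_model`, `FingerprintGuard`, and all parameter values are hypothetical stand-ins): an authorised user first presents a fingerprint input that the model must classify into an agreed class with a confidence inside an agreed range; only then does the wrapper serve normal predictions, while unauthorised queries receive near-random output.

```python
import math
import random

def softmax(logits):
    """Convert raw logits into a probability distribution."""
    m = max(logits)
    exps = [math.exp(v - m) for v in logits]
    total = sum(exps)
    return [e / total for e in exps]

def toy_model(x):
    """Hypothetical stand-in 3-class classifier: fixed linear logits over a 2-D input."""
    return [x[0] + x[1], 2 * x[0] - x[1], x[1] - x[0]]

class FingerprintGuard:
    """Serve normal predictions only after a valid fingerprint is presented.

    A fingerprint is valid when the model assigns it the agreed class with a
    confidence inside the agreed range (the 'specific classes and confidences'
    of the abstract).
    """

    def __init__(self, model, fp_class, conf_lo, conf_hi):
        self.model = model
        self.fp_class = fp_class      # class the fingerprint must map to
        self.conf_lo = conf_lo        # lower bound of the agreed confidence range
        self.conf_hi = conf_hi        # upper bound of the agreed confidence range
        self.authorised = False

    def authenticate(self, fp_input):
        """Check the fingerprint's predicted class and confidence."""
        probs = softmax(self.model(fp_input))
        pred = max(range(len(probs)), key=probs.__getitem__)
        self.authorised = (pred == self.fp_class
                           and self.conf_lo <= probs[pred] <= self.conf_hi)
        return self.authorised

    def predict(self, x):
        """Normal inference for authorised users; degraded output otherwise."""
        if self.authorised:
            probs = softmax(self.model(x))
            return max(range(len(probs)), key=probs.__getitem__)
        return random.randrange(3)  # unauthorised: near-random class label
```

For example, with `FingerprintGuard(toy_model, fp_class=1, conf_lo=0.55, conf_hi=0.95)`, the crafted input `[1.0, 0.0]` yields class 1 with confidence about 0.71 and so passes authentication, whereas an arbitrary input typically fails, leaving the wrapper in its degraded mode.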
Pages: 111-126 (16 pages)
Related papers
14 records
  • [1] Active intellectual property protection for deep neural networks through stealthy backdoor and users' identities authentication
    Xue, Mingfu
    Sun, Shichang
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    APPLIED INTELLIGENCE, 2022, 52 (14): 16497-16511
  • [2] Watermarking of Deep Recurrent Neural Network Using Adversarial Examples to Protect Intellectual Property
    Rathi, Pulkit
    Bhadauria, Saumya
    Rathi, Sugandha
    APPLIED ARTIFICIAL INTELLIGENCE, 2022, 36 (01)
  • [3] Robustness of Deep Neural Networks in Adversarial Examples
    Teng, Da
    Song, Xiao m
    Gong, Guanghong
    Han, Liang
    INTERNATIONAL JOURNAL OF INDUSTRIAL ENGINEERING-THEORY APPLICATIONS AND PRACTICE, 2017, 24 (02): 123-133
  • [4] Interpretability Analysis of Deep Neural Networks With Adversarial Examples
    Dong, Y.-P.
    Su, H.
    Zhu, J.
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (01): 75-86
  • [5] DeepTrace: A Secure Fingerprinting Framework for Intellectual Property Protection of Deep Neural Networks
    Wang, Runhao
    Kang, Jiexiang
    Yin, Wei
    Wang, Hui
    Sun, Haiying
    Chen, Xiaohong
    Gao, Zhongjie
    Wang, Shuning
    Liu, Jing
    2021 IEEE 20TH INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS (TRUSTCOM 2021), 2021: 188-195
  • [6] Natural Scene Statistics for Detecting Adversarial Examples in Deep Neural Networks
    Kherchouche, Anouar
    Fezza, Sid Ahmed
    Hamidouche, Wassim
    Deforges, Olivier
    2020 IEEE 22ND INTERNATIONAL WORKSHOP ON MULTIMEDIA SIGNAL PROCESSING (MMSP), 2020
  • [7] AdvParams: An Active DNN Intellectual Property Protection Technique via Adversarial Perturbation Based Parameter Encryption
    Xue, Mingfu
    Wu, Zhiyu
    Zhang, Yushu
    Wang, Jian
    Liu, Weiqiang
    IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTING, 2023, 11 (03): 664-678
  • [8] ARGAN: Adversarially Robust Generative Adversarial Networks for Deep Neural Networks Against Adversarial Examples
    Choi, Seok-Hwan
    Shin, Jin-Myeong
    Liu, Peng
    Choi, Yoon-Ho
    IEEE ACCESS, 2022, 10: 33602-33615
  • [9] Detecting Adversarial Examples on Deep Neural Networks With Mutual Information Neural Estimation
    Gao, Song
    Wang, Ruxin
    Wang, Xiaoxuan
    Yu, Shui
    Dong, Yunyun
    Yao, Shaowen
    Zhou, Wei
    IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING, 2023, 20 (06): 5168-5181