AFA: Adversarial fingerprinting authentication for deep neural networks

Cited by: 35
Authors
Zhao, Jingjing [1 ]
Hu, Qingyue [2 ]
Liu, Gaoyang [2 ]
Ma, Xiaoqiang [2 ]
Chen, Fei [3 ]
Hassan, Mohammad Mehedi [4 ]
Affiliations
[1] Hubei Univ, Sch Comp Sci & Informat Engn, Wuhan 430062, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[3] Qingdao Univ, Coll Comp Sci & Technol, Qingdao 266071, Peoples R China
[4] King Saud Univ, Coll Comp & Informat Sci, Riyadh 11543, Saudi Arabia
Keywords
DNN fingerprinting; IP verification; Adversarial examples; 5G mobile services
DOI
10.1016/j.comcom.2019.12.016
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline code
0812
Abstract
With the vigorous development of deep learning, sharing trained deep neural network (DNN) models has become a common trend in various fields. An urgent problem is to protect the intellectual property (IP) rights of model owners and to detect IP infringement. DNN watermarking, which embeds signature information into the protected model and then tries to extract it from a suspected plagiarized model, has been the main approach to IP verification. However, existing DNN watermarking methods struggle to remain robust against various removal attacks, since their watermarks are single in form or limited in quantity. In addition, embedding watermarks into a DNN model degrades its original prediction ability. Moreover, if a model was distributed before the watermarks were embedded, its IP cannot be correctly recognized or protected. To this end, we propose AFA, a new DNN fingerprinting technique that extracts inherent features of the model itself instead of embedding fixed watermarks. The features we select as model fingerprints are a set of specially crafted adversarial examples, called Adversarial-Marks, which transfer far better to models derived from the original model than to unrelated models. We also design a new IP verification scheme to identify a remote model's ownership. Experimental results show that our mechanism works well for common image classification models and can be easily adapted to other deep neural networks.
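To make the idea concrete, the following is a minimal sketch (not the authors' implementation) of the fingerprinting-and-verification flow described in the abstract: craft targeted adversarial examples ("Adversarial-Marks") on the owner's model, then test a suspect model by measuring how many of them still induce the intended target labels; derived models are expected to match at a much higher rate than unrelated ones. The PGD-style crafting step, the matching-rate statistic, and the decision threshold are illustrative assumptions.

import torch
import torch.nn.functional as F


def craft_adversarial_marks(model, images, target_labels, epsilon=0.03, steps=20):
    """Perturb `images` within an L-infinity ball so `model` predicts `target_labels`
    (a targeted, PGD-style attack used here as a stand-in for Adversarial-Mark crafting)."""
    model.eval()
    marks = images.clone().detach()
    step_size = epsilon / steps
    for _ in range(steps):
        marks.requires_grad_(True)
        loss = F.cross_entropy(model(marks), target_labels)
        grad, = torch.autograd.grad(loss, marks)
        with torch.no_grad():
            # Targeted attack: move against the gradient to pull predictions toward the targets.
            marks = marks - step_size * grad.sign()
            # Keep the perturbation bounded and the pixels in a valid range.
            marks = torch.clamp(marks, images - epsilon, images + epsilon).clamp(0.0, 1.0)
        marks = marks.detach()
    return marks


def matching_rate(model, marks, target_labels):
    """Fraction of Adversarial-Marks that the queried model classifies as the intended targets."""
    model.eval()
    with torch.no_grad():
        preds = model(marks).argmax(dim=1)
    return (preds == target_labels).float().mean().item()


def verify_ownership(suspect_model, marks, target_labels, threshold=0.8):
    """Claim ownership when the transfer rate exceeds a (hypothetical) decision threshold."""
    return matching_rate(suspect_model, marks, target_labels) >= threshold

In a verification session, the owner would query the remote suspect model with the stored marks and compare the resulting matching rate against the threshold; the threshold value above is only a placeholder for whatever decision rule the scheme calibrates.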
Pages: 488-497
Number of pages: 10