AFA: Adversarial fingerprinting authentication for deep neural networks

Cited by: 35
Authors
Zhao, Jingjing [1 ]
Hu, Qingyue [2 ]
Liu, Gaoyang [2 ]
Ma, Xiaoqiang [2 ]
Chen, Fei [3 ]
Hassan, Mohammad Mehedi [4 ]
Affiliations
[1] Hubei Univ, Sch Comp Sci & Informat Engn, Wuhan 430062, Peoples R China
[2] Huazhong Univ Sci & Technol, Sch Elect Informat & Commun, Wuhan 430074, Peoples R China
[3] Qingdao Univ, Coll Comp Sci & Technol, Qingdao 266071, Peoples R China
[4] King Saud Univ, Coll Comp & Informat Sci, Riyadh 11543, Saudi Arabia
Keywords
DNN fingerprinting; IP verification; Adversarial examples; 5G mobile services
DOI
10.1016/j.comcom.2019.12.016
CLC classification number
TP [Automation and Computer Technology]
Discipline classification code
0812
Abstract
With the vigorous development of deep learning, sharing trained deep neural network (DNN) models has become common practice in many fields. An urgent problem is how to protect the intellectual property (IP) rights of model owners and detect IP infringement. DNN watermarking, which embeds signature information into the protected model and tries to extract it from a suspected plagiarized model, has been the main approach to IP verification. However, existing DNN watermarking methods struggle to remain robust against various removal attacks, since their watermarks are single in form or limited in quantity. Meanwhile, embedding watermarks into a DNN model degrades its original prediction ability. Moreover, if the model was distributed before the watermarks were embedded, its IP cannot be correctly recognized or protected. To this end, we propose AFA, a new DNN fingerprinting technology that extracts the inherent features of the model itself instead of embedding fixed watermarks. The features we select as model fingerprints are a set of specially crafted adversarial examples called Adversarial-Marks, which transfer much better to models derived from the original model than to other unrelated models. We also design a new IP verification scheme to identify a remote model's ownership. Experimental results show that our mechanism works well for common image classification models and can be easily adapted to other deep neural networks.
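The verification idea described in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's actual method: it uses a toy linear classifier in place of a DNN, a single FGSM-style step in place of the paper's specially crafted Adversarial-Marks, and an assumed matching-rate threshold of 0.7. The point is only to show the protocol shape: record adversarial inputs and the labels they induce on the owner's model, then check how often a suspect model reproduces those labels.

```python
import numpy as np

rng = np.random.default_rng(0)

def predict(W, x):
    # Toy linear classifier: row i of W scores class i; argmax is the label.
    return int(np.argmax(W @ x))

def fgsm_step(W, x, eps=0.5):
    # One FGSM-style step: push x toward the runner-up class by following
    # the sign of the score-difference gradient (a crude adversarial example).
    scores = W @ x
    y = int(np.argmax(scores))
    masked = scores.copy()
    masked[y] = -np.inf
    runner_up = int(np.argmax(masked))
    grad = W[runner_up] - W[y]  # gradient of (runner-up score - top score)
    return x + eps * np.sign(grad)

def make_fingerprint(W, inputs):
    # Fingerprint: adversarial inputs plus the labels they induce on the
    # owner's model (standing in for the paper's Adversarial-Marks).
    marks = [fgsm_step(W, x) for x in inputs]
    labels = [predict(W, m) for m in marks]
    return marks, labels

def verify_ownership(W_suspect, marks, labels, threshold=0.7):
    # Matching rate: fraction of marks on which the suspect model agrees
    # with the recorded labels; claim ownership above the threshold
    # (0.7 is an assumed value for this sketch).
    hits = sum(predict(W_suspect, m) == y for m, y in zip(marks, labels))
    return hits / len(marks) >= threshold

# Owner model, a model derived from it (tiny perturbation, standing in
# for fine-tuning), and an unrelated independently trained model.
W_owner = rng.normal(size=(3, 8))
W_derived = W_owner + 0.001 * rng.normal(size=W_owner.shape)
W_unrelated = rng.normal(size=(3, 8))

inputs = [rng.normal(size=8) for _ in range(50)]
marks, labels = make_fingerprint(W_owner, inputs)

print(verify_ownership(W_derived, marks, labels))    # derived model matches
print(verify_ownership(W_unrelated, marks, labels))  # unrelated model does not
```

The derived model reproduces nearly all recorded labels because its decision boundaries barely moved, while the unrelated model agrees only at chance level, which is exactly the transferability gap the fingerprint exploits.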
Pages: 488-497
Page count: 10
Related papers
50 items in total
  • [1] Fingerprinting Deep Neural Networks Globally via Universal Adversarial Perturbations
    Peng, Zirui
    Li, Shaofeng
    Chen, Guoxing
    Zhang, Cheng
    Zhu, Haojin
    Xue, Minhui
    2022 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR), 2022: 13420-13429
  • [2] A Deep Neural Network Fingerprinting Detection Method Based on Active Learning of Generative Adversarial Networks
    Gua, Xiaohui
    He, Niannian
    Sun, Xinxin
    2024 5TH INTERNATIONAL CONFERENCE ON COMPUTER ENGINEERING AND APPLICATION, ICCEA 2024, 2024: 248-252
  • [3] DeepTaster: Adversarial Perturbation-Based Fingerprinting to Identify Proprietary Dataset Use in Deep Neural Networks
    Park, Seonhye
    Abuadbba, Alsharif
    Wang, Shuo
    Moore, Kristen
    Gao, Yansong
    Kim, Hyoungshick
    Nepal, Surya
    39TH ANNUAL COMPUTER SECURITY APPLICATIONS CONFERENCE, ACSAC 2023, 2023: 535-549
  • [4] Fingerprinting Deep Neural Networks - A DeepFool Approach
    Wang, Si
    Chang, Chip-Hong
    2021 IEEE INTERNATIONAL SYMPOSIUM ON CIRCUITS AND SYSTEMS (ISCAS), 2021
  • [5] Adversarial robustness improvement for deep neural networks
    Eleftheriadis, Charis
    Symeonidis, Andreas
    Katsaros, Panagiotis
    MACHINE VISION AND APPLICATIONS, 2024, 35 (03)
  • [6] Robustness of deep neural networks in adversarial examples
    Song, Xiao
  • [7] Adversarial image detection in deep neural networks
    Carrara, Fabio
    Falchi, Fabrizio
    Caldelli, Roberto
    Amato, Giuseppe
    Becarelli, Rudy
    MULTIMEDIA TOOLS AND APPLICATIONS, 2019, 78 (03): 2815-2835
  • [8] Disrupting adversarial transferability in deep neural networks
    Wiedeman, Christopher
    Wang, Ge
    PATTERNS, 2022, 3 (05)