Survey of Copyright Protection Schemes Based on DNN Model

Cited by: 0
Authors
Fan X. [1 ]
Zhou X. [1 ]
Zhu B. [1 ]
Dong J. [2 ]
Niu J. [3 ]
Wang H. [2 ]
Institutions
[1] School of Cyberspace Security, Hainan University, Haikou
[2] School of Cyber Engineering, Xidian University, Xi'an
[3] School of Computer Science and Technology, Xidian University, Xi'an
Keywords
Black box watermarking; Copyright protection; Deep neural network (DNN); Gray box watermarking; Null box watermarking; White box watermarking;
DOI
10.7544/issn1000-1239.20211115
Abstract
Emerging technologies such as deep neural networks (DNNs) have developed rapidly and been applied to industrial Internet security with unprecedented performance. However, training a DNN model requires capturing large amounts of proprietary data from the different scenarios of the target application, consuming extensive computing resources, and tuning the network topology with expert assistance so that the parameters are properly trained. As valuable intellectual property, DNN models should be technically protected against illegal copying, redistribution, and abuse. Inspired by classical watermarking technologies that protect intellectual property rights in multimedia content, neural network watermarking is currently the DNN model copyright protection method that attracts the most attention from researchers. To date, there has been no complete account of applying neural network watermarking to the intellectual property protection of DNN models. We survey the relevant work published in CCF-recommended journals and conferences over the past five years. From the perspective of watermark embedding and extraction, and building on the original classification into white-box and black-box watermarking, we extend neural network watermarking to gray-box and null-box categories. White-box and black-box watermarking are summarized in detail according to their different ideas and task models, and the performance of the four categories is compared. Finally, we discuss the future challenges and research directions of neural network watermarking, aiming to provide guidance that further promotes such technologies for DNN model copyright protection. © 2022, Science Press. All rights reserved.
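The black-box category mentioned in the abstract verifies ownership by querying a suspect model with secret trigger inputs and checking whether its predictions match the secret labels fixed at embedding time. A minimal sketch of that verification step follows; the function name, threshold value, and the two toy models are hypothetical illustrations, not a method from the surveyed papers.

```python
def verify_black_box(model_predict, triggers, secret_labels, threshold=0.9):
    """Black-box ownership check: query the suspect model on the secret
    trigger inputs and measure how often its predictions match the
    secret labels chosen when the watermark was embedded."""
    matches = sum(model_predict(x) == y for x, y in zip(triggers, secret_labels))
    rate = matches / len(triggers)
    return rate >= threshold, rate

# Toy demonstration: a "watermarked" model that memorized the trigger
# mapping versus an independent model that never saw it.
triggers = list(range(10))
secret_labels = [x % 3 for x in triggers]

watermarked = lambda x: x % 3        # reproduces the secret labels
independent = lambda x: (x + 1) % 3  # unrelated model

ok_w, rate_w = verify_black_box(watermarked, triggers, secret_labels)
ok_i, rate_i = verify_black_box(independent, triggers, secret_labels)
# ok_w is True (match rate 1.0); ok_i is False (match rate 0.0)
```

Because verification needs only query access, this style of check works even when the suspect model is deployed behind an API, which is why black-box schemes dominate much of the surveyed literature.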
Pages: 953-977 (24 pages)
References (116 in total)
  • [91] Li Meng, Zhong Qi, Zhang Leo Yu, et al., Protecting the intellectual property of deep neural networks with watermarking: The frequency domain approach, Proc of the IEEE 19th Int Conf on Trust, Security and Privacy in Computing and Communications (TrustCom), pp. 402-409, (2020)
  • [92] Jebreel N M, Domingo-Ferrer J, Sanchez D, et al., KeyNet: An asymmetric key-style framework for watermarking deep learning models[J/OL], Applied Sciences, (2021)
  • [93] Sun Shichang, Xue Mingfu, Wang Jian, et al., Protecting the intellectual properties of deep neural networks with an additional class and steganographic images
  • [94] Szyller S, Atli B G, Marchal S, et al., DAWN: Dynamic adversarial watermarking of neural networks
  • [95] Zhu Renjie, Zhang Xinpeng, Shi Mengte, et al., Secure neural network watermarking protocol against forging attack, EURASIP Journal on Image and Video Processing, 2020, 1, pp. 1-12, (2020)
  • [96] Li Huiying, Wenger E, Shan S, et al., Piracy resistant watermarks for deep neural networks
  • [97] Maung M A P, Kiya H., Piracy-resistant DNN watermarking by block-wise image transformation with secret key, Proc of the 2021 ACM Workshop on Information Hiding and Multimedia Security, pp. 159-164, (2021)
  • [98] Aprilpyone M, Kiya H., Block-wise image transformation with secret key for adversarially robust defense, IEEE Transactions on Information Forensics and Security, 16, pp. 2709-2723, (2021)
  • [99] Mao Xiaojiao, Shen Chunhua, Yang Yubin, Image restoration using very deep convolutional encoder-decoder networks with symmetric skip connections, Advances in Neural Information Processing Systems, 29, pp. 2802-2810, (2016)
  • [100] Zhang Kai, Zuo Wangmeng, Chen Yunjin, et al., Beyond a Gaussian denoiser: Residual learning of deep CNN for image denoising, IEEE Transactions on Image Processing, 26, 7, pp. 3142-3155, (2017)