Verifying Integrity of Deep Ensemble Models by Lossless Black-box Watermarking with Sensitive Samples

Authors
Lin, Lina [1 ]
Wu, Hanzhou [1 ]
Affiliations
[1] Shanghai Univ, Sch Commun & Informat Engn, Shanghai 200444, Peoples R China
Source
2022 10TH INTERNATIONAL SYMPOSIUM ON DIGITAL FORENSICS AND SECURITY (ISDFS) | 2022
Funding
National Natural Science Foundation of China
Keywords
Watermarking; deep neural networks; fingerprinting; integrity; fragile; black-box; lossless; reversible
DOI
10.1109/ISDFS55398.2022.9800818
CLC number
TP3 [Computing Technology, Computer Technology]
Discipline code
0812
Abstract
With the widespread use of deep neural networks (DNNs) in many areas, a growing number of studies focus on protecting DNN models from intellectual property (IP) infringement, and many existing methods apply digital watermarking for this purpose. Most of them either embed a watermark directly into the internal network structure/parameters or insert a zero-bit watermark by fine-tuning the model to be protected on a set of so-called trigger samples. Although these methods work well, they were designed for individual DNN models and cannot be directly applied to deep ensemble models (DEMs), which combine multiple DNN models to make the final decision. This motivates us to propose a novel black-box watermarking method for DEMs that can be used to verify their integrity. In the proposed method, a certain number of sensitive samples are carefully selected by mimicking real-world attacks on the DEM and comparing the prediction results of the sub-models of the non-attacked and attacked DEMs on a carefully crafted dataset. By analyzing the predictions of the target DEM on these sensitive samples, we are able to verify its integrity. Unlike many previous methods, the proposed method does not modify the original DEM to be protected, i.e., it is lossless. Experimental results show that DEM integrity can be reliably verified even if only one sub-model was attacked, indicating good practical potential.
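The verification pipeline the abstract describes — simulate an attack on the DEM, keep inputs whose ensemble prediction flips, record the clean predictions as a fingerprint, and later check the deployed DEM against that fingerprint — can be sketched as follows. This is a toy illustration under stated assumptions: the linear "sub-models", the sign-flip attack, and all function names are hypothetical stand-ins, not the authors' implementation.

```python
# Toy sketch of sensitive-sample integrity verification for a deep ensemble
# model (DEM). The linear "sub-models" and the simulated attack are
# illustrative assumptions, not the paper's actual setup.
import numpy as np

rng = np.random.default_rng(0)


def predict(models, x):
    """Black-box DEM output: majority vote over sub-model decisions."""
    votes = [int(np.dot(w, x) > 0) for w in models]
    return int(sum(votes) > len(models) / 2)


def simulate_attack(models):
    """A mimicked real-world attack: tamper with one sub-model only."""
    return [-models[0]] + list(models[1:])


def select_sensitive(models, candidates):
    """Keep candidates whose ensemble label flips under the simulated attack."""
    attacked = simulate_attack(models)
    return [x for x in candidates
            if predict(models, x) != predict(attacked, x)]


# Owner side: build the DEM, craft sensitive samples, record fingerprints.
models = [rng.standard_normal(2) for _ in range(3)]
candidates = rng.standard_normal((200, 2))
sensitive = select_sensitive(models, candidates)
fingerprints = [predict(models, x) for x in sensitive]


def verify(target_models):
    """Integrity check: the target DEM must reproduce every fingerprint."""
    return all(predict(target_models, x) == y
               for x, y in zip(sensitive, fingerprints))
```

In this sketch `verify(models)` returns True for the intact DEM, while `verify(simulate_attack(models))` returns False because every sensitive sample flips under that attack even though only one sub-model was tampered with. Note that the DEM itself is never modified, which is what makes the scheme lossless.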
Pages: 6