DeepDIST: A Black-Box Anti-Collusion Framework for Secure Distribution of Deep Models

Cited by: 3
Authors
Cheng, Hang [1 ]
Li, Xibin [2 ]
Wang, Huaxiong [3 ]
Zhang, Xinpeng [4 ]
Liu, Ximeng [2 ]
Wang, Meiqing [1 ]
Li, Fengyong [5 ]
Affiliations
[1] Fuzhou Univ, Sch Math & Stat, Fuzhou 350108, Fujian, Peoples R China
[2] Fuzhou Univ, Coll Comp Sci & Big Data, Fuzhou 350108, Fujian, Peoples R China
[3] Nanyang Technol Univ, Sch Phys & Math Sci, Singapore 639798, Singapore
[4] Fudan Univ, Sch Comp Sci, Shanghai 200433, Peoples R China
[5] Shanghai Univ Elect Power, Coll Comp Sci & Technol, Shanghai 201306, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Deep neural networks; anti-collusion; digital watermarking; digital fingerprinting;
DOI
10.1109/TCSVT.2023.3284914
CLC classification
TM [Electrical engineering]; TN [Electronics and communication technology];
Discipline codes
0808; 0809;
Abstract
Due to the enormous computing and storage overhead of well-trained Deep Neural Network (DNN) models, protecting the intellectual property of model owners is a pressing need. As the commercialization of deep models becomes increasingly popular, pre-trained models delivered to users risk being illegally copied, redistributed, or abused. In this paper, we propose DeepDIST, the first end-to-end secure DNN distribution framework for the black-box scenario. Specifically, our framework adopts a dual-level fingerprint (FP) mechanism to provide reliable ownership verification, and introduces two equivalent transformations that resist collusion attacks, together with a newly designed similarity loss term that improves the security of these transformations. Unlike existing passive defense schemes that merely detect colluding participants, we introduce an active defense strategy: damaging the performance of the model that results from malicious collusion. Extensive experimental results show that DeepDIST maintains the accuracy of the host DNN after fingerprint embedding for reliable traitor tracing, and is robust against several popular model modifications. Furthermore, the anti-collusion effect is evaluated on two typical classification tasks (10-class and 100-class), where DeepDIST drops the prediction accuracy of the colluded model to 10% and 1% (random guessing), respectively.
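The core anti-collusion idea described above (function-preserving "equivalent transformations" applied per distributed copy, so that parameter-level collusion such as averaging destroys the model) can be illustrated with a toy example. The sketch below is not the paper's actual construction; it assumes a minimal two-layer MLP and uses hidden-neuron permutation, a standard function-preserving transformation, purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny two-layer MLP: f(x) = W2 @ relu(W1 @ x + b1) + b2
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(3, 8)), rng.normal(size=3)
base = (W1, b1, W2, b2)

def forward(params, x):
    W1, b1, W2, b2 = params
    return W2 @ np.maximum(W1 @ x + b1, 0.0) + b2

def permute_hidden(params, perm):
    """Function-preserving transformation: reorder hidden neurons,
    adjusting both the incoming and outgoing weights consistently."""
    W1, b1, W2, b2 = params
    return W1[perm], b1[perm], W2[:, perm], b2

# Two distributed copies, each with a different (secret) permutation.
copy_a = permute_hidden(base, np.roll(np.arange(8), 1))
copy_b = permute_hidden(base, np.roll(np.arange(8), 3))

x = rng.normal(size=4)
# Each copy behaves identically to the base model ...
assert np.allclose(forward(copy_a, x), forward(base, x))
assert np.allclose(forward(copy_b, x), forward(base, x))

# ... but averaging the two copies (a parameter-level collusion attack)
# mixes mismatched neuron orderings and no longer computes f.
avg = tuple((pa + pb) / 2 for pa, pb in zip(copy_a, copy_b))
assert not np.allclose(forward(avg, x), forward(base, x))
```

In this toy setting the averaged parameters correspond to no consistent neuron ordering, so the colluded model's outputs collapse toward noise, mirroring the random-guess accuracy reported in the abstract.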
Pages: 97-109
Page count: 13