Model-Reuse Attacks on Deep Learning Systems

Cited by: 117
Authors
Ji, Yujie [1]
Zhang, Xinyang [1]
Ji, Shouling [2,3]
Luo, Xiapu [4]
Wang, Ting [1]
Affiliations
[1] Lehigh Univ, Bethlehem, PA 18015 USA
[2] Zhejiang Univ, Hangzhou, Zhejiang, Peoples R China
[3] Alibaba ZJU Joint Res Inst Frontier Technol, Hangzhou, Zhejiang, Peoples R China
[4] Hong Kong Polytech Univ, Hong Kong, Peoples R China
Source
PROCEEDINGS OF THE 2018 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY (CCS'18) | 2018
Funding
US National Science Foundation
Keywords
Deep learning systems; Third-party model; Model-reuse attack;
DOI
10.1145/3243734.3243757
CLC Number
TP301 [Theory and Methods]
Discipline Code
081202
Abstract
Many of today's machine learning (ML) systems are built by reusing an array of, often pre-trained, primitive models, each fulfilling a distinct functionality (e.g., feature extraction). The increasing use of primitive models significantly simplifies and expedites the development cycles of ML systems. Yet, because most such models are contributed and maintained by untrusted sources, their lack of standardization or regulation entails profound security implications, about which little is known thus far. In this paper, we demonstrate that malicious primitive models pose immense threats to the security of ML systems. We present a broad class of model-reuse attacks wherein maliciously crafted models trigger host ML systems to misbehave on targeted inputs in a highly predictable manner. By empirically studying four deep learning systems (including both individual and ensemble systems) used in skin cancer screening, speech recognition, face verification, and autonomous steering, we show that such attacks are (i) effective: the host systems misbehave on the targeted inputs as desired by the adversary with high probability; (ii) evasive: the malicious models function indistinguishably from their benign counterparts on non-targeted inputs; (iii) elastic: the malicious models remain effective regardless of various system design choices and tuning strategies; and (iv) easy: the adversary needs little prior knowledge about the data used for system tuning or inference. We provide analytical justification for the effectiveness of model-reuse attacks, which points to the unprecedented complexity of today's primitive models; the issue thus appears fundamental to many ML systems. We further discuss potential countermeasures and their challenges, which lead to several promising research directions.
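To make the reuse pattern concrete, below is a minimal sketch of the composition the abstract describes: an untrusted, pre-trained feature extractor frozen inside a host system whose classifier head is tuned locally. This is an illustration in PyTorch, not the authors' code; the paper prescribes no framework, and every class and variable name here (HostSystem, extractor, etc.) is hypothetical.

```python
import torch
import torch.nn as nn

class HostSystem(nn.Module):
    """Host ML system: a reused primitive model plus a locally tuned head."""
    def __init__(self, feature_extractor, feat_dim, num_classes):
        super().__init__()
        self.features = feature_extractor
        for p in self.features.parameters():
            p.requires_grad = False  # the primitive model is reused as-is
        self.classifier = nn.Linear(feat_dim, num_classes)  # tuned locally

    def forward(self, x):
        return self.classifier(self.features(x))

# Hypothetical stand-in for a pre-trained extractor downloaded from a model
# zoo; in the paper's threat model, this is the component into which an
# adversary would embed crafted weights.
extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
host = HostSystem(extractor, feat_dim=128, num_classes=2)

x = torch.randn(4, 3, 32, 32)   # dummy batch of 32x32 RGB inputs
print(host(x).shape)            # torch.Size([4, 2])
```

Because the extractor's weights are frozen and only the small head is retrained, a maliciously crafted extractor survives local tuning intact, which is why the attacks the paper studies remain effective across design and tuning choices.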
Pages: 349-363
Page count: 15