DeepPayload: Black-box Backdoor Attack on Deep Learning Models through Neural Payload Injection

Cited by: 56
Authors
Li, Yuanchun [1 ]
Hua, Jiayi [2]
Wang, Haoyu [2 ]
Chen, Chunyang [3 ]
Liu, Yunxin [1 ]
Affiliations
[1] Microsoft Research, Beijing, China
[2] Beijing University of Posts and Telecommunications, Beijing, China
[3] Monash University, Melbourne, VIC, Australia
Source
2021 IEEE/ACM 43rd International Conference on Software Engineering (ICSE 2021) | 2021
Funding
National Natural Science Foundation of China
Keywords
Deep learning; backdoor attack; reverse engineering; malicious payload; mobile application;
DOI
10.1109/ICSE43902.2021.00035
CLC Number (Chinese Library Classification)
TP31 [Computer Software]
Subject Classification Codes
081202; 0835
Abstract
Deep learning models are increasingly used in mobile applications as critical components. Unlike program bytecode, whose vulnerabilities and threats have been widely discussed, whether and how the deep learning models deployed in applications can be compromised is not well understood, since neural networks are usually viewed as a black box. In this paper, we introduce a highly practical backdoor attack achieved with a set of reverse-engineering techniques over compiled deep learning models. The core of the attack is a neural conditional branch, constructed with a trigger detector and several operators, that is injected into the victim model as a malicious payload. The attack is effective because the conditional logic can be flexibly customized by the attacker, and scalable because it does not require any prior knowledge of the original model. We evaluated the attack's effectiveness using 5 state-of-the-art deep learning models and real-world samples collected from 30 users. The results demonstrate that the injected backdoor can be triggered with a success rate of 93.5%, while adding less than 2 ms of latency overhead and causing no more than a 1.4% accuracy decrease. We further conducted an empirical study on real-world mobile deep learning apps collected from Google Play and found 54 apps that were vulnerable to our attack, including popular and security-critical ones. The results call for awareness among deep learning application developers and auditors of the need to better protect deployed models.
Pages: 263-274
Page count: 12
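As a rough illustration of the conditional-branch idea described in the abstract, the sketch below gates the victim model's output between its original prediction and an attacker-chosen target class, based on the score of a trigger detector. The names (`trigger_detector`, `conditional_branch`, `target_class`) and the sign-based thresholding are illustrative assumptions for exposition only, not the paper's actual operator-level payload, which is injected directly into a compiled model graph.

```python
import numpy as np

def trigger_detector(image: np.ndarray) -> float:
    """Stand-in for the small trigger-detection network. Assumption: in the
    real attack this is a trained CNN that returns a confidence in [0, 1]
    that the attacker's trigger appears in the input image."""
    return 0.0  # placeholder: no trigger detected

def conditional_branch(image: np.ndarray,
                       victim_logits: np.ndarray,
                       target_class: int) -> np.ndarray:
    """Gated blend: pass the victim model's logits through unchanged, but
    switch to an attacker-chosen one-hot target when the trigger detector
    fires. The hard if/else is approximated with basic tensor operators so
    the whole branch can live inside a model graph."""
    t = trigger_detector(image)                    # trigger confidence in [0, 1]
    gate = np.clip(np.sign(t - 0.5), 0.0, 1.0)     # ~1 if trigger present, else 0
    target_logits = np.eye(victim_logits.size)[target_class] * (victim_logits.max() + 1.0)
    return (1.0 - gate) * victim_logits + gate * target_logits

# Example: with no trigger present, the original prediction is untouched.
logits = np.array([0.1, 2.3, 0.4])
print(conditional_branch(np.zeros((224, 224, 3)), logits, target_class=0))
```

When the gate evaluates to 1 (trigger present), the branch emits the attacker's target class regardless of the victim model's prediction, which mirrors the abstract's point that the injected conditional logic is fully attacker-controlled.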