On Embedding Backdoor in Malware Detectors Using Machine Learning

Cited by: 3
Authors
Sasaki, Shoichiro [1 ]
Hidano, Seira [2 ]
Uchibayashi, Toshihiro [1 ]
Suganuma, Takuo [3 ]
Hiji, Masahiro [4 ]
Kiyomoto, Shinsaku [2 ]
Affiliations
[1] Tohoku Univ, Grad Sch Informat Sci, Sendai, Miyagi, Japan
[2] KDDI Res Inc, Fujimino, Japan
[3] Tohoku Univ, Cybersci Ctr, Sendai, Miyagi, Japan
[4] Tohoku Univ, Grad Sch Econ & Management, Sendai, Miyagi, Japan
Source
2019 17TH INTERNATIONAL CONFERENCE ON PRIVACY, SECURITY AND TRUST (PST) | 2019
Keywords
malware detection; poisoning attack; backdoor; machine learning;
DOI
10.1109/pst47121.2019.8949034
Chinese Library Classification (CLC)
TP [automation technology; computer technology];
Discipline Code
0812;
Abstract
Research on malware detection using machine learning has become increasingly active. However, conventional detection techniques do not consider attacks on the machine learning itself, which have grown more sophisticated in recent years. In this research, we focus on the data poisoning attack, one of the typical attacks on machine learning, and aim to clarify its influence on malware detection technology. A data poisoning attack intentionally manipulates the predictions of a trained model by injecting poisoning data into the training data; applied carefully, it can embed a backdoor that induces misprediction only for specific input data. In this paper, we first propose an attack framework for embedding, via data poisoning, a backdoor that prevents the detection of only specific types of malware. Next, we describe a method for generating poisoning data efficiently, while avoiding detection of the attack, by solving an optimization problem. Furthermore, we target malware detection based on logistic regression and show the effectiveness of our method through evaluation experiments on two datasets.
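The sketch below illustrates, in Python, the kind of backdoor the abstract describes: poisoning points that carry an attacker-chosen trigger pattern and a benign label are injected into the training set of a logistic regression detector, so that malware carrying the trigger evades detection while ordinary malware is still flagged. The feature dimensions, trigger values, and synthetic data are illustrative assumptions, and the paper's optimization-based generation of poisoning data is not reproduced here.

# Minimal sketch of backdoor embedding via data poisoning against a
# logistic regression malware detector, under the assumptions stated above.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_features = 20

# Synthetic stand-in for static malware feature vectors (0 = benign, 1 = malware).
X_benign = rng.normal(0.0, 1.0, size=(500, n_features))
X_malware = rng.normal(1.5, 1.0, size=(500, n_features))
X = np.vstack([X_benign, X_malware])
y = np.array([0] * 500 + [1] * 500)

def add_trigger(samples):
    # Hypothetical trigger: a fixed pattern on two feature dimensions that
    # the attacker can also impose on the target malware at inference time.
    out = samples.copy()
    out[:, 0] = 5.0
    out[:, 1] = -5.0
    return out

# Poisoning set: malware-like points carrying the trigger, labeled benign.
X_poison = add_trigger(rng.normal(1.5, 1.0, size=(50, n_features)))
y_poison = np.zeros(50, dtype=int)

clean = LogisticRegression(max_iter=1000).fit(X, y)
backdoored = LogisticRegression(max_iter=1000).fit(
    np.vstack([X, X_poison]), np.concatenate([y, y_poison]))

# Triggered malware evades the backdoored model, while plain malware is
# still detected, so ordinary test accuracy hides the backdoor.
X_target = add_trigger(rng.normal(1.5, 1.0, size=(100, n_features)))
print("clean model, triggered malware detection rate:",
      clean.predict(X_target).mean())
print("backdoored model, triggered malware detection rate:",
      backdoored.predict(X_target).mean())
print("backdoored model, plain malware detection rate:",
      backdoored.predict(X_malware).mean())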
Pages: 300-304
Page count: 5