Binary Black-Box Adversarial Attacks with Evolutionary Learning against IoT Malware Detection

Cited by: 0
Authors
Wang, Fangwei [1 ,2 ]
Lu, Yuanyuan [1 ]
Wang, Changguang [1 ,2 ]
Li, Qingru [1 ,2 ]
Affiliations
[1] Hebei Normal Univ, Coll Comp & Cyber Secur, Shijiazhuang 050024, Hebei, Peoples R China
[2] Key Lab Network & Informat Secur Hebei Prov, Shijiazhuang 050024, Hebei, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
SIMILARITY;
DOI
10.1155/2021/8736946
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology];
Discipline code
0812;
Abstract
5G is about to open a Pandora's box of security threats to the Internet of Things (IoT). Key technologies introduced by the 5G network, such as network function virtualization and edge computing, bring new security threats and risks to the Internet infrastructure, so stronger malware detection and defense capabilities are required. Deep learning (DL) is now widely used in malware detection, but recent research has demonstrated that adversarial attacks pose a hazard to DL-based models. The key to enhancing the antiattack performance of malware detection systems is the generation of effective adversarial samples. However, many existing generation methods rely on manual feature extraction or white-box models, which makes them inapplicable in real-world scenarios. This paper presents an effective binary manipulation-based attack framework that generates adversarial samples with an evolutionary learning algorithm. The framework chooses appropriate action sequences to modify malicious samples so that the modified malware can circumvent the detection system. The evolutionary algorithm adaptively simplifies the modification actions and makes the adversarial samples more targeted. Our approach efficiently generates adversarial samples without human intervention, and the generated samples can effectively defeat DL-based malware detection models while preserving the executability and malicious behavior of the original malware. We apply the generated adversarial samples to attack the detection engines of VirusTotal. Experimental results show that our adversarial samples reach an evasion success rate of 47.8%, outperforming other attack methods. By adding adversarial samples to the training process and retraining the MalConv network, we show that detection accuracy is improved by 10.3%.
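The abstract describes evolving action sequences of executability-preserving binary modifications until the modified sample evades a black-box detector. A minimal Python sketch of such a loop, assuming a hypothetical action set and a stand-in scoring function in place of a real detector query (all names here are illustrative, not the paper's actual implementation):

```python
import random

# Hypothetical set of binary manipulations that preserve executability
# (names are illustrative only, not taken from the paper).
ACTIONS = ["append_benign_bytes", "pad_overlay", "add_section", "rename_section"]

def detector_score(action_seq):
    """Stand-in for a black-box detector returning a maliciousness score
    in [0, 1]; a real attack would query the target model instead."""
    return random.Random(hash(tuple(action_seq))).random()

def mutate(seq):
    """Randomly replace, insert, or drop one action in a sequence."""
    seq, op = list(seq), random.choice(["replace", "insert", "drop"])
    if op == "replace" and seq:
        seq[random.randrange(len(seq))] = random.choice(ACTIONS)
    elif op == "insert":
        seq.insert(random.randrange(len(seq) + 1), random.choice(ACTIONS))
    elif op == "drop" and len(seq) > 1:
        seq.pop(random.randrange(len(seq)))
    return seq

def evolve(pop_size=20, generations=30, threshold=0.5):
    """Evolve action sequences until one drives the detector score below
    the decision threshold, i.e. the modified sample evades detection."""
    population = [[random.choice(ACTIONS)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=detector_score)
        if detector_score(scored[0]) < threshold:
            return scored[0]  # evasive action sequence found
        survivors = scored[: pop_size // 2]  # keep the best half
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return scored[0]

if __name__ == "__main__":
    print(evolve())
```

Because selection keeps only short, low-scoring sequences and mutation can drop actions, the loop tends to simplify the modification sequence over generations, mirroring the adaptive simplification the framework describes.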
Pages: 9