Can machine learning model with static features be fooled: an adversarial machine learning approach

Cited by: 27
Authors
Taheri, Rahim [1 ]
Javidan, Reza [1 ]
Shojafar, Mohammad [2 ]
Vinod, P. [2 ]
Conti, Mauro [2 ]
Affiliations
[1] Shiraz Univ Technol, Comp Engn & IT Dept, Shiraz, Iran
[2] Univ Padua, Dept Math, Padua, Italy
Source
CLUSTER COMPUTING-THE JOURNAL OF NETWORKS SOFTWARE TOOLS AND APPLICATIONS | 2020, Vol. 23, Issue 4
Keywords
Adversarial machine learning; Android malware detection; Poison attacks; Generative adversarial network; Jacobian algorithm; MALWARE DETECTION;
DOI
10.1007/s10586-020-03083-5
Chinese Library Classification
TP [Automation Technology, Computer Technology]
Discipline Classification Code
0812
Abstract
The widespread adoption of smartphones dramatically increases the risk of attacks and the spread of mobile malware, especially on the Android platform. Machine learning-based solutions have already been used as a tool to supersede signature-based anti-malware systems. However, malware authors leverage features from malicious and legitimate samples to estimate statistical differences in order to create adversarial examples. Hence, to evaluate the vulnerability of machine learning algorithms in malware detection, we propose five different attack scenarios that perturb malicious applications (apps). Under these perturbations, the classification algorithm fits the discriminant function inappropriately to the data points, eventually yielding a higher misclassification rate. Further, to distinguish adversarial examples from benign samples, we propose two defense mechanisms to counter the attacks. To validate our attacks and solutions, we test our model on three different benchmark datasets. We also test our methods using various classification algorithms and compare them with a state-of-the-art data poisoning method based on the Jacobian matrix. Promising results show that the generated adversarial samples can evade detection with very high probability. Additionally, the evasive variants generated by our attack models, when used to harden the developed anti-malware system, improve the detection rate by up to 50% when using the generative adversarial network (GAN) method.
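As context for the Jacobian-based attack the abstract compares against, the sketch below illustrates the general idea of a gradient/Jacobian-guided feature-flip evasion against a classifier trained on binary static features. It is a minimal sketch under stated assumptions, not the authors' implementation: the scikit-learn LogisticRegression stand-in classifier, the synthetic permission-style feature matrix, and the flip_attack helper are all hypothetical, and the paper's five attack scenarios and GAN-based defense are more elaborate than what is shown here.

```python
# Illustrative sketch only -- NOT the paper's implementation.
# Assumptions: a scikit-learn LogisticRegression stands in for the malware
# classifier; random binary vectors stand in for Android static features
# (e.g., requested permissions); flip_attack is a hypothetical helper.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 50)).astype(float)  # binary static features
w_true = rng.normal(size=50)                           # synthetic ground truth
y = (X @ w_true > 0).astype(int)                       # 1 = malware, 0 = benign
clf = LogisticRegression(max_iter=1000).fit(X, y)

def flip_attack(x, max_flips=10):
    """Greedy JSMA-style evasion: add (0 -> 1) the features whose
    gradient most reduces the malware score.

    For logistic regression the Jacobian of the decision function with
    respect to the input is just the weight vector, so the most negative
    weights identify the best features to add. Only additions are allowed,
    mirroring the constraint that removing code or permissions could
    break a real app.
    """
    x = x.copy()
    w = clf.coef_[0]                    # d(score)/d(x); constant for this model
    for j in np.argsort(w):             # most negative weights first
        if max_flips == 0 or clf.predict(x.reshape(1, -1))[0] == 0:
            break                       # budget exhausted or already evasive
        if x[j] == 0 and w[j] < 0:      # adding feature j lowers the score
            x[j] = 1.0
            max_flips -= 1
    return x

malicious = X[y == 1][0]
adv = flip_attack(malicious)
print("flips:", int((adv != malicious).sum()),
      "| original label:", clf.predict(malicious.reshape(1, -1))[0],
      "| adversarial label:", clf.predict(adv.reshape(1, -1))[0])
```

The add-only constraint reflects a common design choice in Android evasion work: features can usually be injected into an app without breaking it, whereas removing them may not be safe. Hardening along the lines the abstract describes would amount to retraining the classifier with such evasive variants relabeled as malware.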
Pages: 3233-3253
Number of pages: 21
Related Papers
50 items in total
  • [41] Detection of algorithmically-generated domains: An adversarial machine learning approach
    Alaeiyan, Mohammadhadi
    Parsa, Saeed
    Vinod, P.
    Conti, Mauro
    COMPUTER COMMUNICATIONS, 2020, 160 : 661 - 673
  • [43] Evaluation of the dependence of radiomic features on the machine learning model
    Demircioglu, Aydin
    INSIGHTS INTO IMAGING, 2022, 13 (01)
  • [44] A hybrid approach for adversarial attack detection based on sentiment analysis model using Machine learning
    Amin, Rashid
    Gantassi, Rahma
    Ahmed, Naeem
    Alshehri, Asma Hassan
    Alsubaei, Faisal S.
    Frnda, Jaroslav
    ENGINEERING SCIENCE AND TECHNOLOGY-AN INTERNATIONAL JOURNAL-JESTECH, 2024, 58
  • [45] Can machine learning explain human learning?
    Vahdat, Mehrnoosh
    Oneto, Luca
    Anguita, Davide
    Funk, Mathias
    Rauterberg, Matthias
    NEUROCOMPUTING, 2016, 192 : 14 - 28
  • [46] The Vulnerability of UAVs: An Adversarial Machine Learning Perspective
    Doyle, Michael
    Harguess, Joshua
    Manville, Keith
    Rodriguez, Mikel
    GEOSPATIAL INFORMATICS XI, 2021, 11733
  • [47] An Adversarial Machine Learning Model Against Android Malware Evasion Attacks
    Chen, Lingwei
    Hou, Shifu
    Ye, Yanfang
    Chen, Lifei
    WEB AND BIG DATA, 2017, 10612 : 43 - 55
  • [48] Security Analytics in the Context of Adversarial Machine Learning
    Tygar, Doug
    IWSPA'16: PROCEEDINGS OF THE 2016 ACM INTERNATIONAL WORKSHOP ON SECURITY AND PRIVACY ANALYTICS, 2016, : 49 - 49
  • [49] A Survey on Adversarial Machine Learning for Cyberspace Defense
    Yu, Zheng-Fei
    Yan, Qiao
    Zhou, Yun
    Zidonghua Xuebao/Acta Automatica Sinica, 2022, 48 (07) : 1625 - 1649
  • [50] A Survey of Adversarial Machine Learning in Cyber Warfare
    Duddu, Vasisht
    DEFENCE SCIENCE JOURNAL, 2018, 68 (04) : 356 - 366