Can machine learning model with static features be fooled: an adversarial machine learning approach

Cited by: 27
Authors
Taheri, Rahim [1 ]
Javidan, Reza [1 ]
Shojafar, Mohammad [2 ]
Vinod, P. [2 ]
Conti, Mauro [2 ]
Affiliations
[1] Shiraz Univ Technol, Comp Engn & IT Dept, Shiraz, Iran
[2] Univ Padua, Dept Math, Padua, Italy
Keywords
Adversarial machine learning; Android malware detection; Poison attacks; Generative adversarial network; Jacobian algorithm; Malware detection
DOI
10.1007/s10586-020-03083-5
Chinese Library Classification (CLC)
TP [Automation technology; computer technology]
Discipline classification code
0812
Abstract
The widespread adoption of smartphones dramatically increases the risk of attacks and the spread of mobile malware, especially on the Android platform. Machine learning-based solutions have already been used to supersede signature-based anti-malware systems. However, malware authors leverage features from malicious and legitimate samples to estimate statistical differences in order to create adversarial examples. Hence, to evaluate the vulnerability of machine learning algorithms in malware detection, we propose five different attack scenarios to perturb malicious applications (apps). By doing this, the classification algorithm inappropriately fits the discriminant function on the set of data points, eventually yielding a higher misclassification rate. Further, to distinguish adversarial examples from benign samples, we propose two defense mechanisms to counter the attacks. To validate our attacks and solutions, we test our model on three different benchmark datasets. We also test our methods with various classifier algorithms and compare them with a state-of-the-art data poisoning method based on the Jacobian matrix. The results show that the generated adversarial samples can evade detection with very high probability. Additionally, when the evasive variants generated by our attack models are used to harden the developed anti-malware system, the detection rate improves by up to 50% with the generative adversarial network (GAN) method.
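To illustrate the kind of attack the abstract describes, the following is a minimal, hypothetical sketch (not the paper's implementation) of a Jacobian-guided, feature-addition evasion attack on a linear malware classifier over binary static features (e.g. Android permissions). Only 0 → 1 flips are allowed, mirroring the common constraint that removing features could break app functionality; the model, weights, and budget are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch of a Jacobian-guided feature-addition attack on a
# linear (logistic) malware classifier with binary static features.
# All names and parameters here are illustrative, not from the paper.

rng = np.random.default_rng(0)
n_features = 20
w = rng.normal(size=n_features)  # assumed pre-trained weights
b = 0.0

def malware_score(x):
    # Sigmoid probability that sample x is malware.
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# A "malicious" sample with random binary static features.
x = (rng.random(n_features) < 0.5).astype(float)

# For a logistic model, d(score)/dx = sigmoid'(z) * w, and sigmoid' > 0,
# so the ranking of per-feature influence is given by w alone.
budget = 3          # max number of features the attacker may add
adv = x.copy()
for _ in range(budget):
    candidates = np.where(adv == 0.0)[0]         # features we may still add
    if candidates.size == 0:
        break
    best = candidates[np.argmin(w[candidates])]  # most score-reducing flip
    if w[best] >= 0:                             # no flip would lower score
        break
    adv[best] = 1.0                              # add the feature

# The perturbed sample's malware score can only go down (or stay equal).
print(malware_score(x), malware_score(adv))
```

Each iteration greedily flips the absent feature whose weight most reduces the malware probability, a simplified stand-in for the Jacobian-matrix saliency attack the abstract compares against; a real attack would use the full model Jacobian rather than fixed linear weights.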
Pages: 3233-3253 (21 pages)