Feature-Space Bayesian Adversarial Learning Improved Malware Detector Robustness

Cited by: 0
Authors
Doan, Bao Gia [1]
Yang, Shuiqiao [2]
Montague, Paul [4]
De Vel, Olivier [3]
Abraham, Tamas [4]
Camtepe, Seyit [3]
Kanhere, Salil S. [2]
Abbasnejad, Ehsan [1]
Ranasinghe, Damith C. [1]
Affiliations
[1] Univ Adelaide, Adelaide, SA, Australia
[2] Univ New South Wales, Kensington, NSW, Australia
[3] CSIRO, Data61, Eveleigh, Australia
[4] Def Sci & Technol Grp, Canberra, ACT, Australia
Source
THIRTY-SEVENTH AAAI CONFERENCE ON ARTIFICIAL INTELLIGENCE, VOL 37 NO 12, 2023
Keywords: (none listed)
DOI: not available
Chinese Library Classification: TP18 [Artificial Intelligence Theory]
Discipline Codes: 081104; 0812; 0835; 1405
Abstract
We present a new algorithm to train a robust malware detector. Malware is a prolific problem and malware detectors are a front-line defense. Modern detectors rely on machine learning algorithms, so the adversarial objective is to devise alterations to malware code that decrease the chance of detection whilst preserving the functionality and realism of the malware. Adversarial learning is effective in improving robustness, but generating functional and realistic adversarial malware samples is non-trivial for two reasons: i) in contrast to tasks that can use gradient-based feedback, adversarial learning is hard in a domain without a differentiable mapping function from the problem space (malware code inputs) to the feature space; and ii) it is difficult to ensure the adversarial malware is realistic and functional. This makes it challenging to develop scalable adversarial machine learning algorithms for large datasets at a production or commercial scale to realize robust malware detectors. We propose an alternative: perform adversarial learning in the feature space rather than the problem space. We prove that the projection into feature space of perturbed, yet valid, malware in the problem space is always a subset of the adversarial examples generated in the feature space. Hence, by training a network to be robust against feature-space adversarial examples, we inherently achieve robustness against problem-space adversarial examples. We formulate a Bayesian adversarial learning objective that captures the distribution of models for improved robustness. To explain the robustness of the Bayesian adversarial learning algorithm, we prove that our learning method bounds the difference between the adversarial risk and the empirical risk, and thereby improves robustness. We show that Bayesian neural networks (BNNs) achieve state-of-the-art results, especially in the False Positive Rate (FPR) regime, and that adversarially trained BNNs achieve state-of-the-art robustness.
Notably, adversarially trained BNNs are robust against stronger attacks with larger attack budgets by a margin of up to 15% on a recent production-scale malware dataset of more than 20 million samples. Importantly, our efforts create a benchmark for future defenses in the malware domain.
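The core idea in the abstract — run the inner adversarial-example search directly in feature space, then train the detector on those perturbed feature vectors — can be sketched minimally. The sketch below is an illustrative assumption, not the authors' algorithm: it uses a plain logistic-regression detector in place of the paper's Bayesian neural network, an FGSM-style L-infinity step in place of whatever inner maximizer the paper uses, and hypothetical function names (`fgsm_feature_perturb`, `adversarial_train`).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_feature_perturb(w, b, x, y, eps):
    """FGSM-style L-inf perturbation computed in feature space.

    For logistic loss, the input gradient is (p - y) * w, so each
    feature moves by eps in the sign of that gradient. Clipping to
    [0, 1] keeps the perturbed vector a valid (normalized) feature
    vector -- a stand-in for feature-space validity constraints.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return np.clip(x + eps * np.sign(grad_x), 0.0, 1.0)

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Min-max training loop: inner max crafts feature-space
    adversarial examples, outer min updates the detector on them."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        X_adv = fgsm_feature_perturb(w, b, X, y, eps)  # inner max
        p = sigmoid(X_adv @ w + b)                     # outer min
        w -= lr * X_adv.T @ (p - y) / len(y)           # logistic-loss gradient
        b -= lr * np.mean(p - y)
    return w, b
```

The subset result quoted in the abstract is what justifies this setup: because every valid problem-space perturbation projects into the feature-space perturbation set, hardening the model against the (larger, differentiable) feature-space set covers the problem-space attacks as well. The paper's Bayesian objective, which averages over a distribution of model weights, is omitted here.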
Pages: 14783 - 14791
Page count: 9
Related Papers
43 in total
  • [1] Adversarial attacks and defenses using feature-space stochasticity
    Ukita, Jumpei
    Ohki, Kenichi
    NEURAL NETWORKS, 2023, 167 : 875 - 889
  • [2] Is It Overkill? Analyzing Feature-Space Concept Drift in Malware Detectors
    Chen, Zhi
    Zhang, Zhenning
    Kan, Zeliang
    Yang, Limin
    Cortellazzi, Jacopo
    Pendlebury, Feargus
    Pierazzi, Fabio
    Cavallaro, Lorenzo
    Wang, Gang
    2023 IEEE SECURITY AND PRIVACY WORKSHOPS, SPW, 2023, : 21 - 28
  • [3] LSGAN-AT: enhancing malware detector robustness against adversarial examples
    Wang, Jianhua
    Chang, Xiaolin
    Wang, Yixiang
    Rodríguez, Ricardo J.
    Zhang, Jianan
    CYBERSECURITY, 2021, 4 (01)
  • [4] Improved domain adaptive object detector via adversarial feature learning
    Marnissi, Mohamed Amine
    Fradi, Hajer
    Sahbani, Anis
    Ben Amara, Najoua Essoukri
    COMPUTER VISION AND IMAGE UNDERSTANDING, 2023, 230
  • [5] Efficiently solving the curse of feature-space dimensionality for improved peptide classification
    Negovetic, Mario
    Otovic, Erik
    Kalafatovic, Daniela
    Mausa, Goran
    DIGITAL DISCOVERY, 2024, 3 (06): : 1182 - 1193
  • [6] FASTEN: Fast Ensemble Learning for Improved Adversarial Robustness
    Huang, Lifeng
    Huang, Qiong
    Qiu, Peichao
    Wei, Shuxin
    Gao, Chengying
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2024, 19 : 2565 - 2580
  • [7] Make Split, not Hijack: Preventing Feature-Space Hijacking Attacks in Split Learning
    Khan, Tanveer
    Budzys, Mindaugas
    Michalas, Antonis
    PROCEEDINGS OF THE 29TH ACM SYMPOSIUM ON ACCESS CONTROL MODELS AND TECHNOLOGIES, SACMAT 2024, 2024, : 19 - 30
  • [8] Transfer Learning across Feature-Rich Heterogeneous Feature Spaces via Feature-Space Remapping (FSR)
    Feuz, Kyle D.
    Cook, Diane J.
    ACM TRANSACTIONS ON INTELLIGENT SYSTEMS AND TECHNOLOGY, 2015, 6 (01)
  • [9] Adversarial malware sample generation method based on the prototype of deep learning detector
    Qiao, Yanchen
    Zhang, Weizhe
    Tian, Zhicheng
    Yang, Laurence T.
    Liu, Yang
    Alazab, Mamoun
    COMPUTERS & SECURITY, 2022, 119