Adversarial Attacks Against Binary Similarity Systems

Cited by: 0
Authors
Capozzi, Gianluca [1 ]
D'Elia, Daniele Cono [1 ]
Di Luna, Giuseppe Antonio [1 ]
Querzoni, Leonardo [1 ]
Affiliations
[1] Sapienza Univ Rome, Dept Comp Control & Management Engn Antonio Ruberti, I-00185 Rome, Italy
Source
IEEE ACCESS | 2024 / Vol. 12
Keywords
Closed box; Perturbation methods; Glass box; Malware; Optimization; Vectors; Deep learning; Binary codes; Assembly; Threat modeling; Adversarial attacks; binary analysis; binary code models; binary similarity; black-box attacks; greedy; white-box attacks
DOI
10.1109/ACCESS.2024.3488204
CLC number
TP [Automation and computer technology]
Discipline code
0812
Abstract
Binary analysis has become essential for software inspection and security assessment. As the number of software-driven devices grows, research is shifting towards autonomous solutions using deep learning models. In this context, a hot topic is the binary similarity problem, which involves determining whether two assembly functions originate from the same source code. However, it is unclear how deep learning models for binary similarity behave in an adversarial context. In this paper, we study the resilience of binary similarity models against adversarial examples, showing that they are susceptible to both targeted and untargeted (w.r.t. similarity goals) attacks performed by black-box and white-box attackers. We extensively test three state-of-the-art binary similarity solutions against (i) a black-box greedy attack that we enrich with a new search heuristic, terming it Spatial Greedy, and (ii) a white-box attack in which we repurpose a gradient-guided strategy used in attacks to image classifiers. Interestingly, the target models are more susceptible to black-box attacks than white-box ones, exhibiting greater resilience in the case of targeted attacks.
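The black-box greedy attack summarized in the abstract can be illustrated with a minimal toy sketch. Everything below is a hypothetical stand-in: the similarity oracle is a trivial Jaccard score over token sets and the candidate pool is a handful of mnemonics, whereas the paper's Spatial Greedy operates on real assembly instructions, semantics-preserving perturbations, and learned embedding models.

```python
def similarity(a, b):
    # Stand-in black-box oracle: Jaccard similarity over token sets.
    # The real attack only assumes query access to a score like this.
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def greedy_untargeted_attack(func, reference, candidates, budget=10, threshold=0.5):
    """Per round, try every single-token substitution from `candidates`
    and keep the one that most reduces similarity to `reference`
    (untargeted w.r.t. the similarity goal). Stop when the score drops
    below `threshold`, the budget runs out, or no move improves."""
    adv = list(func)
    for _ in range(budget):
        score = similarity(adv, reference)
        if score < threshold:
            break
        best = (score, None, None)
        for i in range(len(adv)):
            for c in candidates:
                trial = adv[:i] + [c] + adv[i + 1:]
                s = similarity(trial, reference)
                if s < best[0]:
                    best = (s, i, c)
        if best[1] is None:  # no improving substitution: give up
            break
        adv[best[1]] = best[2]
    return adv

orig = ["mov", "add", "xor", "ret"]
adv = greedy_untargeted_attack(orig, orig, candidates=["nop", "lea", "sub"])
print(similarity(adv, orig))  # drops below the 0.5 threshold
```

The paper's Spatial Greedy heuristic additionally narrows the candidate pool per round using embedding-space proximity, which this exhaustive inner loop does not model.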
Pages: 161247 - 161269
Page count: 23
Related Papers
50 records in total
  • [1] Binary thresholding defense against adversarial attacks
    Wang, Yutong
    Zhang, Wenwen
    Shen, Tianyu
    Yu, Hui
    Wang, Fei-Yue
    NEUROCOMPUTING, 2021, 445 : 61 - 71
  • [2] ADVERSARIAL ATTACKS AGAINST AUDIO SURVEILLANCE SYSTEMS
    Ntalampiras, Stavros
    2022 30TH EUROPEAN SIGNAL PROCESSING CONFERENCE (EUSIPCO 2022), 2022, : 284 - 288
  • [3] Defending Distributed Systems Against Adversarial Attacks
    Su, L.
    Performance Evaluation Review, 2020, 47 (03): : 24 - 27
  • [4] Adversarial Attacks Against IoT Identification Systems
    Kotak, Jaidip
    Elovici, Yuval
    IEEE INTERNET OF THINGS JOURNAL, 2023, 10 (09) : 7868 - 7883
  • [5] Evaluating Robustness Against Adversarial Attacks: A Representational Similarity Analysis Approach
    Liu, Chenyu
    2023 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN, 2023
  • [6] Defending Against Adversarial Attacks in Speaker Verification Systems
    Chang, Li-Chi
    Chen, Zesheng
    Chen, Chao
    Wang, Guoping
    Bi, Zhuming
    2021 IEEE INTERNATIONAL PERFORMANCE, COMPUTING, AND COMMUNICATIONS CONFERENCE (IPCCC), 2021
  • [7] Practical Adversarial Attacks Against Speaker Recognition Systems
    Li, Zhuohang
    Shi, Cong
    Xie, Yi
    Liu, Jian
    Yuan, Bo
    Chen, Yingying
    PROCEEDINGS OF THE 21ST INTERNATIONAL WORKSHOP ON MOBILE COMPUTING SYSTEMS AND APPLICATIONS (HOTMOBILE'20), 2020, : 9 - 14
  • [8] Securing Malware Cognitive Systems against Adversarial Attacks
    Ji, Yuede
    Bowman, Benjamin
    Huang, H. Howie
    2019 IEEE INTERNATIONAL CONFERENCE ON COGNITIVE COMPUTING (IEEE ICCC 2019), 2019, : 1 - 9
  • [9] Defending against adversarial attacks on graph neural networks via similarity property
    Yao, Minghong
    Yu, Haizheng
    Bian, Hong
    AI COMMUNICATIONS, 2023, 36 (01) : 27 - 39