DFDS: Data-Free Dual Substitutes Hard-Label Black-Box Adversarial Attack

Cited by: 0
Authors
Jiang, Shuliang [1 ]
He, Yusheng [1 ]
Zhang, Rui [1 ]
Kang, Zi [1 ]
Xia, Hui [1 ]
Affiliations
[1] Ocean Univ China, Fac Informat Sci & Engn, Qingdao 266100, Peoples R China
Source
KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT III, KSEM 2024, 2024, Vol. 14886
Funding
National Natural Science Foundation of China
Keywords
Deep neural networks; Adversarial attack; White-box/black-box attack; Transfer-based adversarial attacks; Adversarial examples
DOI
10.1007/978-981-97-5498-4_21
CLC classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Transfer-based hard-label black-box adversarial attacks face two challenges: obtaining a suitable proxy dataset and issuing a large number of queries to the target model, without any guarantee of a high attack success rate. To address these challenges, we introduce the techniques of dual substitute model extraction and embedding-space adversarial example search, and propose a novel hard-label black-box adversarial attack approach named Data-Free Dual Substitutes Hard-Label Black-Box Adversarial Attack (DFDS). The approach first trains a generative adversarial network through adversarial training; this training requires no proxy dataset and depends only on the hard-label outputs of the target model. It then uses the natural evolution strategy (NES) to search the embedding space and construct the final adversarial examples. Comprehensive experimental results demonstrate that, under the same query budget, DFDS achieves higher attack success rates than baseline methods. Compared with DFMS-HL, the state-of-the-art mixed-mechanism hard-label black-box attack, DFDS shows significant improvements on the SVHN, CIFAR-10, and CIFAR-100 datasets. Notably, in the targeted attack scenario on CIFAR-10, its success rate reaches 76.59%, an improvement of up to 21.99%.
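To make the embedding-space search step concrete, below is a minimal sketch of an NES-style search over a generator's latent space using only hard-label feedback. The `generator(z)` and `hard_label_query(image)` interfaces, the binary fitness function, and all hyperparameters are illustrative assumptions, not the authors' DFDS implementation.

```python
import numpy as np

def nes_embedding_search(generator, hard_label_query, target_label,
                         z_dim=128, pop_size=50, sigma=0.1, lr=0.05, steps=200):
    """Illustrative NES search over a generator's latent (embedding) space.

    generator(z)            -> image (hypothetical substitute-generator interface)
    hard_label_query(image) -> predicted class of the black-box target model
    Returns a latent vector whose generated image the target labels as target_label.
    """
    rng = np.random.default_rng(0)
    z = rng.standard_normal(z_dim)                 # random starting point in latent space

    def fitness(z_vec):
        # Hard-label feedback only: 1 if the target model outputs the desired class.
        return 1.0 if hard_label_query(generator(z_vec)) == target_label else 0.0

    for _ in range(steps):
        # Antithetic sampling: evaluate symmetric perturbations around z.
        eps = rng.standard_normal((pop_size // 2, z_dim))
        eps = np.concatenate([eps, -eps], axis=0)
        scores = np.array([fitness(z + sigma * e) for e in eps])
        if scores.std() > 0:                       # normalize scores to stabilize the update
            scores = (scores - scores.mean()) / scores.std()
        # NES gradient estimate and ascent step on the latent vector.
        grad = (eps.T @ scores) / (len(eps) * sigma)
        z = z + lr * grad
        if fitness(z) == 1.0:                      # success: generated image fools the target
            break
    return z
```

In a complete attack the fitness would also constrain the perturbation relative to the original input; the sketch only shows how NES estimates an ascent direction from hard-label responses alone.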
Pages: 274-285
Page count: 12