DFDS: Data-Free Dual Substitutes Hard-Label Black-Box Adversarial Attack

Cited by: 0
Authors
Jiang, Shuliang [1 ]
He, Yusheng [1 ]
Zhang, Rui [1 ]
Kang, Zi [1 ]
Xia, Hui [1 ]
Institutions
[1] Ocean Univ China, Fac Informat Sci & Engn, Qingdao 266100, Peoples R China
Source
KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT III, KSEM 2024 | 2024 / Vol. 14886
Funding
National Natural Science Foundation of China
关键词
Deep neural networks; Adversarial attack; White-box/black-box attack; Transfer-based adversarial attacks; Adversarial examples;
DOI
10.1007/978-981-97-5498-4_21
CLC Classification
TP18 [Artificial Intelligence Theory]
Discipline Codes
081104; 0812; 0835; 1405
Abstract
Transfer-based hard-label black-box adversarial attacks face challenges in obtaining suitable proxy datasets and demand a substantial query volume to the target model without guaranteeing a high attack success rate. To address these challenges, we introduce the techniques of dual substitute model extraction and embedding-space adversarial example search, proposing a novel hard-label black-box adversarial attack approach named Data-Free Dual Substitutes Hard-Label Black-Box Adversarial Attack (DFDS). This approach first trains a generative adversarial network through adversarial training; the training relies on no proxy dataset, depending only on the hard-label outputs of the target model. It then uses a natural evolution strategy (NES) to search the embedding space and construct the final adversarial examples. Comprehensive experimental results demonstrate that, under the same query volume, DFDS achieves higher attack success rates than baseline methods. Compared with DFMS-HL, the state-of-the-art mixed-mechanism hard-label black-box attack, DFDS shows significant improvements across the SVHN, CIFAR-10, and CIFAR-100 datasets. Notably, in the targeted attack scenario on CIFAR-10, the success rate reaches 76.59%, an improvement of up to 21.99%.
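The NES step mentioned in the abstract estimates a gradient from black-box objective evaluations alone, which is what makes embedding-space search possible with only model queries. The sketch below is a minimal, generic illustration of antithetic NES gradient estimation and latent-space ascent, not the authors' implementation; the function names, the toy objective, and all parameter values are illustrative assumptions.

```python
import numpy as np

def nes_gradient(f, z, sigma=0.1, n=40, rng=None):
    """Estimate the gradient of a black-box objective f at latent z
    with antithetic NES sampling: g ~ (1/(n*sigma)) * sum [f(z+s*e) - f(z-s*e)] * e."""
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(z)
    for _ in range(n // 2):
        eps = rng.standard_normal(z.shape)
        # Each antithetic pair (+eps, -eps) contributes one finite-difference term.
        grad += (f(z + sigma * eps) - f(z - sigma * eps)) * eps
    return grad / (n * sigma)

def nes_search(f, z0, lr=0.05, steps=200, **kw):
    """Ascend f over the embedding space using only NES gradient estimates."""
    z = z0.copy()
    for _ in range(steps):
        z = z + lr * nes_gradient(f, z, **kw)
    return z

# Toy usage: maximize a concave objective whose optimum is known,
# standing in for an attack loss evaluated via target-model queries.
target = np.full(4, 2.0)
objective = lambda z: -np.sum((z - target) ** 2)
z_star = nes_search(objective, np.zeros(4))
```

In an attack setting, `objective` would instead query the target model on the generator's decoding of `z`, so each NES step costs `n` hard-label queries; the antithetic pairing halves estimator variance at no extra query cost.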
Pages: 274-285 (12 pages)