A Stable and Efficient Data-Free Model Attack With Label-Noise Data Generation

Cited: 0
Authors
Zhang, Zhixuan [1 ]
Zheng, Xingjian [2 ]
Qing, Linbo [1 ]
Liu, Qi [3 ]
Wang, Pingyu [4 ]
Liu, Yu [4 ]
Liao, Jiyang [4 ]
Affiliations
[1] Sichuan Univ, Sch Cyber Sci & Engn, Chengdu 610207, Peoples R China
[2] Frost Drill Intellectual Software Pte Ltd, Int Plaza, Singapore 079903, Singapore
[3] South China Univ Technol, Sch Future Technol, Guangzhou 511442, Peoples R China
[4] Sichuan Univ, Coll Elect & Informat Engn, Chengdu 610065, Peoples R China
Funding
National Natural Science Foundation of China;
Keywords
Training; Closed box; Generators; Data models; Data collection; Adaptation models; Diversity methods; Cloning; Glass box; Computational modeling; Deep neural network; data-free; adversarial examples; closed-box attack;
DOI
10.1109/TIFS.2025.3550066
Chinese Library Classification
TP301 [Theory, Methods];
Discipline Code
081202;
Abstract
The objective of a data-free closed-box adversarial attack is to attack a victim model without using its internal information, training datasets, or semantically similar substitute datasets. In response to these stricter attack scenarios, recent studies have employed generative networks to synthesize data for training substitute models. Nevertheless, these approaches suffer from unstable training and low attack efficiency. In this paper, we propose a novel query-efficient data-free closed-box adversarial attack method. To mitigate unstable training, for the first time, we directly manipulate the intermediate-layer features of a generator without relying on any substitute models. Specifically, a label-noise-based generation module is created to enhance intra-class patterns by incorporating partial historical information during the learning process. Additionally, we present a feature-disturbed diversity generation method to enlarge the inter-class distance. Meanwhile, we propose an adaptive intra-class attack strategy to heighten attack capability within a limited query budget. In this strategy, an entropy-based distance is utilized to characterize the relative information in model outputs, while positive classes and negative samples are used to improve attack efficiency. Comprehensive experiments conducted on six datasets demonstrate the superior performance of our method compared to six state-of-the-art data-free closed-box competitors in both label-only and probability-only attack scenarios. Notably, our method achieves the highest attack success rate on the online Microsoft Azure model under an extremely low query budget. Additionally, the proposed approach not only achieves more stable training but also significantly reduces the query count needed for balanced data generation. Furthermore, our method maintains the best performance under existing defense models and a limited query budget.
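The abstract mentions using an entropy-based distance on model outputs to spend a limited query budget where it matters most. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of the general idea: rank candidate inputs by the Shannon entropy of the victim model's output distribution and query only the most uncertain ones. All function names are hypothetical.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def prediction_entropy(probs, eps=1e-12):
    """Shannon entropy of each output distribution (higher = less confident)."""
    return -(probs * np.log(probs + eps)).sum(axis=-1)

def select_informative_queries(logits, budget):
    """Keep the `budget` candidates whose outputs carry the most uncertainty,
    a common query-efficiency heuristic in closed-box attack pipelines."""
    probs = softmax(np.asarray(logits, dtype=float))
    ent = prediction_entropy(probs)
    return np.argsort(-ent)[:budget]
```

For example, given one confident output (logits `[10, 0, 0]`) and one uniform output (logits `[1, 1, 1]`), the selector with `budget=1` would pick the uniform one, since its entropy is maximal.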
Pages: 3131 - 3145
Page count: 15
Related Papers
50 records
  • [31] CDFKD-MFS: Collaborative Data-Free Knowledge Distillation via Multi-Level Feature Sharing
    Hao, Zhiwei
    Luo, Yong
    Wang, Zhi
    Hu, Han
    An, Jianping
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 4262 - 4274
  • [32] Up to Thousands-fold Storage Saving: Towards Efficient Data-Free Distillation of Large-Scale Visual Classifiers
    Ye, Fanfan
    Lu, Bingyi
    Ma, Liang
    Zhong, Qiaoyong
    Xie, Di
    PROCEEDINGS OF THE 31ST ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA, MM 2023, 2023, : 8376 - 8386
  • [33] RED++: Data-Free Pruning of Deep Neural Networks via Input Splitting and Output Merging
    Yvinec, Edouard
    Dapogny, Arnaud
    Cord, Matthieu
    Bailly, Kevin
    IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, 2023, 45 (03) : 3664 - 3676
  • [34] Factors influencing the acceptance and use of a South African data-free job search application
    Mangadi, Tsholofelo
    Petersen, Fazlyn
    SOUTH AFRICAN JOURNAL OF INFORMATION MANAGEMENT, 2024, 26 (01):
  • [35] Reusable generator data-free knowledge distillation with hard loss simulation for image classification
    Sun, Yafeng
    Wang, Xingwang
    Huang, Junhong
    Chen, Shilin
    Hou, Minghui
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 265
  • [36] Filtered Weighted Correction Training Method for Data with Noise Label
    Wang, Yulong
    Hu, Xiaohui
    Jia, Zhe
    PROCEEDINGS OF THE 2ND INTERNATIONAL CONFERENCE ON DEEP LEARNING THEORY AND APPLICATIONS (DELTA), 2021, : 177 - 184
  • [37] An Imperceptible Data Augmentation Based Blackbox Clean-Label Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2023, 70 (12) : 5011 - 5024
  • [38] Deep Learning Enabled Data Offloading With Cyber Attack Detection Model in Mobile Edge Computing Systems
    Gopalakrishnan, T.
    Ruby, D.
    Al-Turjman, Fadi
    Gupta, Deepak
    Pustokhina, Irina V.
    Pustokhin, Denis A.
    Shankar, K.
    IEEE ACCESS, 2020, 8 : 185938 - 185949
  • [39] Explainable and Data-Efficient Deep Learning for Enhanced Attack Detection in IIoT Ecosystem
    Attique, Danish
    Hao, Wang
    Ping, Wang
    Javeed, Danish
    Kumar, Prabhat
    IEEE INTERNET OF THINGS JOURNAL, 2024, 11 (24): 38976 - 38986
  • [40] Generate and Purify: Efficient Person Data Generation for Re-Identification
    Lu, Jianjie
    Zhang, Weidong
    Yin, Haibing
    IEEE TRANSACTIONS ON MULTIMEDIA, 2022, 24 : 558 - 566