Clean-label attack based on negative afterimage on neural networks

Cited: 0
Authors
Zang, Liguang [1 ]
Li, Yuancheng [1 ]
Affiliations
[1] North China Elect Power Univ, Dept Control & Comp Engineering, Beijing 102206, Peoples R China
Keywords
Clean-label attack; Adversarial attacks; Afterimage; Machine learning;
DOI
10.1007/s13042-024-02230-3
Chinese Library Classification
TP18 [Artificial Intelligence Theory];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Adversarial attacks can fool machine learning models by adding small but carefully designed perturbations to an image, causing the model to misclassify it as another class. Typically, an attacker chooses a target class as the pollution source, generates a perturbed image from it, and tricks the classifier into assigning the image to that target class. However, such attacks face practical limitations, including the need for specific control over the labeling of the target class and the difficulty of generating effective adversarial samples in real-world scenarios. To overcome these limitations, we propose a novel clean-label attack based on the negative afterimage. A negative afterimage is a visual phenomenon whereby, after gazing at a bright image for some time, one perceives an inverse-color image of the previously observed bright region when shifting focus to a darker area. We exploit this phenomenon and use negative afterimages as pollution sources to generate adversarial samples, which removes any dependency on the labeling of the pollution source. We then generate the adversarial samples with an optimization-based approach that keeps them visually undetectable. Experiments on two widely used datasets, CIFAR-10 and ImageNet, show that the proposed method effectively generates adversarial samples with high stealthiness.
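The abstract does not give the attack's exact formulation, so the following is only a minimal sketch of the idea as described: the negative afterimage is approximated by simple color inversion, and the "pollution" is modeled as a feature-collision objective under an L-infinity bound. The feature_extractor callable and the eps, steps, and lr values are illustrative assumptions, not details from the paper.

```python
# Sketch only: uses a negative afterimage (color inversion) as the pollution source.
import torch
import torch.nn.functional as F

def negative_afterimage(image: torch.Tensor) -> torch.Tensor:
    # Inverse-color image of a [0, 1]-normalized tensor (C, H, W):
    # a rough stand-in for the perceptual afterimage phenomenon.
    return 1.0 - image

def craft_adversarial(image, feature_extractor, eps=8 / 255, steps=100, lr=0.01):
    # Optimize a bounded perturbation so the image's deep features move
    # toward those of its negative afterimage (the pollution source),
    # while the L-infinity clamp keeps the change visually undetectable.
    source = negative_afterimage(image)
    with torch.no_grad():
        target_feats = feature_extractor(source.unsqueeze(0))
    delta = torch.zeros_like(image, requires_grad=True)
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        adv = (image + delta).clamp(0.0, 1.0)
        loss = F.mse_loss(feature_extractor(adv.unsqueeze(0)), target_feats)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # imperceptibility constraint
    return (image + delta.detach()).clamp(0.0, 1.0)
```

With, for example, a torchvision ResNet truncated before its classification head as feature_extractor, this produces an image whose features drift toward the inverted image while the pixel change stays within eps.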
Pages: 449 - 460
Page count: 12
Related Papers
50 in total
  • [1] An Imperceptible Data Augmentation Based Blackbox Clean-Label Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2023, 70 (12) : 5011 - 5024
  • [2] Poison Frogs! Targeted Clean-Label Poisoning Attacks on Neural Networks
    Shafahi, Ali
    Huang, W. Ronny
    Najibi, Mahyar
    Suciu, Octavian
    Studer, Christoph
    Dumitras, Tudor
    Goldstein, Tom
    ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS 31 (NIPS 2018), 2018, 31
  • [3] Clean-label poisoning attack with perturbation causing dominant features
    Zhang, Chen
    Tang, Zhuo
    Li, Kenli
    INFORMATION SCIENCES, 2023, 644
  • [4] NARCISSUS: A Practical Clean-Label Backdoor Attack with Limited Information
    Zeng, Yi
    Pan, Minzhou
    Just, Hoang Anh
    Lyu, Lingjuan
    Qiu, Meikang
    Jia, Ruoxi
    PROCEEDINGS OF THE 2023 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2023, 2023, : 771 - 785
  • [5] A Clean-Label Graph Backdoor Attack Method in Node Classification Task
    Xing, Xiaogang
    Xu, Ming
    Bai, Yujing
    Yang, Dongdong
    KNOWLEDGE-BASED SYSTEMS, 2024, 304
  • [6] Clean-label backdoor attack and defense: An examination of language model vulnerability
    Zhao, Shuai
    Xu, Xiaoyu
    Xiao, Luwei
    Wen, Jinming
    Tuan, Luu Anh
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 265
  • [7] Bullseye Polytope: A Scalable Clean-Label Poisoning Attack with Improved Transferability
    Aghakhani, Hojjat
    Meng, Dongyu
    Wang, Yu-Xiang
    Kruegel, Christopher
    Vigna, Giovanni
    2021 IEEE EUROPEAN SYMPOSIUM ON SECURITY AND PRIVACY (EUROS&P 2021), 2021, : 159 - 178
  • [8] A Textual Clean-Label Backdoor Attack Strategy against Spam Detection
    Yerlikaya, Fahri Anil
    Bahtiyar, Serif
    2021 14TH INTERNATIONAL CONFERENCE ON SECURITY OF INFORMATION AND NETWORKS (SIN 2021), 2021,
  • [9] Practical clean-label backdoor attack against static malware detection
    Zhan, Dazhi
    Xu, Kun
    Liu, Xin
    Han, Tong
    Pan, Zhisong
    Guo, Shize
    COMPUTERS & SECURITY, 2025, 150
  • [10] Clean-label Backdoor Attack on Machine Learning-based Malware Detection Models and Countermeasures
    Zheng, Wanjia
    Omote, Kazumasa
    2022 IEEE INTERNATIONAL CONFERENCE ON TRUST, SECURITY AND PRIVACY IN COMPUTING AND COMMUNICATIONS, TRUSTCOM, 2022, : 1235 - 1242