Clean-label attack based on negative afterimage on neural networks

Cited: 0
Authors
Zang, Liguang [1 ]
Li, Yuancheng [1 ]
Affiliations
[1] North China Elect Power Univ, Dept Control & Comp Engineering, Beijing 102206, Peoples R China
Keywords
Clean-label attack; Adversarial attacks; Afterimage; Machine learning;
DOI
10.1007/s13042-024-02230-3
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Adversarial attacks can fool machine learning models by adding small but carefully designed perturbations to an image, causing the model to misclassify it into another class. In general, an attacker chooses a target class as a pollution source to generate a perturbed image and tricks the classifier into assigning it the target label. However, the effectiveness of adversarial attacks suffers from some limitations, including the need for specific control over the labeling of the target class and the difficulty of generating effective adversarial samples in real-world scenarios. To overcome these limitations, we propose a novel clean-label attack based on negative afterimages. A negative afterimage is a visual phenomenon whereby, after gazing at a bright image for some time, one perceives an inverse-color image of the previously observed bright region when shifting focus to a darker area. We exploit this phenomenon and use negative afterimages as pollution sources to generate adversarial samples, which removes any dependency on the labeling of pollution sources. We then generate the adversarial samples with an optimization-based approach to ensure that they remain visually undetectable. Experiments on two widely used datasets, CIFAR-10 and ImageNet, show that the proposed method effectively generates adversarial samples with high stealthiness.
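The abstract describes a negative afterimage as the inverse-color image of a bright region. The paper's exact construction is not given here, but the phenomenon it names corresponds to a simple per-channel inversion of an 8-bit image; the sketch below (function name hypothetical, not from the paper) illustrates that inversion on a toy patch:

```python
import numpy as np

def negative_afterimage(image: np.ndarray) -> np.ndarray:
    """Invert an 8-bit RGB image, mimicking the negative-afterimage effect:
    each channel value v becomes 255 - v, so bright regions turn dark and
    each color flips to its complement."""
    assert image.dtype == np.uint8
    return 255 - image

# Toy 2x2 RGB patch: red, green, blue, and white pixels.
bright = np.array([[[255, 0, 0], [0, 255, 0]],
                   [[0, 0, 255], [255, 255, 255]]], dtype=np.uint8)
afterimage = negative_afterimage(bright)
print(afterimage[0, 0].tolist())  # red -> its complement cyan: [0, 255, 255]
```

Such an inverted image would serve only as the pollution source; the paper's optimization step that blends it into a visually undetectable perturbation is not reproduced here.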
Pages: 449-460
Page count: 12