Clean-label attack based on negative afterimage on neural networks

Cited by: 0
Authors
Zang, Liguang [1 ]
Li, Yuancheng [1 ]
Institution
[1] North China Elect Power Univ, Dept Control & Comp Engineering, Beijing 102206, Peoples R China
Keywords
Clean-label attack; Adversarial attacks; Afterimage; Machine learning
DOI
10.1007/s13042-024-02230-3
CLC Number
TP18 [Artificial Intelligence Theory]
Subject Classification Codes
081104; 0812; 0835; 1405
Abstract
Adversarial attacks can fool machine learning models by adding small but carefully designed perturbations to an image, causing the model to misclassify it as another class. Typically, an attacker chooses a target class as a pollution source, generates a perturbed image, and tricks the classifier into assigning it to the target class. However, the effectiveness of adversarial attacks is subject to several limitations, including the need for explicit control over the labeling of the target class and the difficulty of generating effective adversarial samples in real-world scenarios. To overcome these limitations, we propose a novel clean-label attack based on the negative afterimage. A negative afterimage is a visual phenomenon whereby, after gazing at a bright image for some time, one perceives an inverse-color image of the previously observed bright region when shifting focus to a darker area. We exploit this phenomenon and use negative afterimages as pollution sources to generate adversarial samples, which removes any dependency on the labeling of the pollution source. We then generate the adversarial samples with an optimization-based approach that keeps them visually undetectable. Experiments on two widely used datasets, CIFAR-10 and ImageNet, show that the proposed method effectively generates adversarial samples with high stealthiness.
Pages: 449-460
Number of pages: 12
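The abstract describes, at a high level, an optimization-based procedure that uses a negative afterimage as the pollution source while keeping the poisoned image visually close to a clean base image. The sketch below illustrates one plausible realization under stated assumptions, not the authors' exact method: the afterimage is approximated by per-channel color inversion, and the poison is optimized to collide with that afterimage in the feature space of a surrogate network while staying inside a small L-infinity ball around the base image. The surrogate network (a torchvision resnet18), the feature-collision loss, and the hyperparameters steps, lr, beta, and epsilon are all illustrative assumptions and do not appear in the record.

# Hedged sketch (not the paper's exact method): approximate the negative
# afterimage by color inversion and craft a clean-label poison via a
# feature-collision style optimization under an L-infinity constraint.
import torch
import torchvision.models as models


def negative_afterimage(img: torch.Tensor) -> torch.Tensor:
    """Approximate a negative afterimage as the complementary-color image.
    `img` is expected in [0, 1] with shape (C, H, W)."""
    return 1.0 - img


def craft_clean_label_poison(base_img, source_img, feature_net,
                             steps=200, lr=0.01, beta=0.1, epsilon=8 / 255):
    """Optimize a poison that matches the afterimage of `source_img` in
    feature space while staying within an L-inf ball around `base_img`.
    All hyperparameters here are illustrative assumptions."""
    afterimage = negative_afterimage(source_img)
    with torch.no_grad():
        target_feat = feature_net(afterimage.unsqueeze(0))

    poison = base_img.clone().requires_grad_(True)
    optimizer = torch.optim.Adam([poison], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        feat = feature_net(poison.unsqueeze(0))
        # Feature collision with the afterimage plus a proximity term that
        # keeps the poison visually close to the clean base image.
        loss = ((feat - target_feat) ** 2).sum() \
            + beta * ((poison - base_img) ** 2).sum()
        loss.backward()
        optimizer.step()
        with torch.no_grad():
            # Project back into the epsilon-ball and the valid pixel range
            # so the perturbation stays visually undetectable.
            delta = (poison - base_img).clamp(-epsilon, epsilon)
            poison.data = (base_img + delta).clamp(0.0, 1.0)
    return poison.detach()


if __name__ == "__main__":
    net = models.resnet18(weights=None)   # surrogate feature extractor (assumed)
    net.fc = torch.nn.Identity()          # expose penultimate features
    net.eval()
    base = torch.rand(3, 224, 224)        # stand-in for a clean base image
    source = torch.rand(3, 224, 224)      # stand-in for a bright source image
    poison = craft_clean_label_poison(base, source, net, steps=10)

In this sketch the proximity term and the epsilon projection stand in for the paper's visual-undetectability requirement; the actual objective and constraints used by the authors are not specified in this record.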