An Imperceptible Data Augmentation Based Blackbox Clean-Label Backdoor Attack on Deep Neural Networks

Cited by: 5
Authors
Xu, Chaohui [1 ]
Liu, Wenye [1 ]
Zheng, Yue [1 ]
Wang, Si [1 ]
Chang, Chip-Hong [1 ]
Affiliation
[1] Nanyang Technol Univ, Sch Elect & Elect Engn, Singapore 639798, Singapore
Funding
National Research Foundation, Singapore;
Keywords
Training; Neurons; Closed box; Artificial neural networks; Perturbation methods; Edge computing; Data augmentation; Clean-label backdoor attack; data augmentation; data poisoning; deep neural networks; edge AI;
DOI
10.1109/TCSI.2023.3298802
CLC classification
TM [Electrical Engineering]; TN [Electronics and Communication Technology];
Discipline codes
0808; 0809;
Abstract
Deep neural networks (DNNs) have permeated into many diverse application domains, making them attractive targets of malicious attacks. DNNs are particularly susceptible to data poisoning attacks. Such attacks can be made more venomous and harder to detect by poisoning the training samples without changing their ground-truth labels. Despite its pragmatism, the clean-label requirement imposes a stiff restriction and strong conflict in simultaneous optimization of attack stealth, success rate, and utility of the poisoned model. Attempts to circumvent the pitfalls often lead to a high injection rate, ineffective embedded backdoors, unnatural triggers, low transferability, and/or poor robustness. In this paper, we overcome these constraints by amalgamating different data augmentation techniques for the backdoor trigger. The spatial intensities of the augmentation methods are iteratively adjusted by interpolating the clean sample and its augmented version according to their tolerance to perceptual loss and augmented feature saliency to target class activation. Our proposed attack is comprehensively evaluated on different network models and datasets. Compared with state-of-the-art clean-label backdoor attacks, it has lower injection rate, stealthier poisoned samples, higher attack success rate, and greater backdoor mitigation resistance while preserving high benign accuracy. Similar attack success rates are also demonstrated on the Intel Neural Compute Stick 2 edge AI device implementation of the poisoned model after weight-pruning and quantization.
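The abstract describes iteratively adjusting the intensity of the augmentation trigger by interpolating a clean sample with its augmented version, subject to a perceptual-loss tolerance. A minimal sketch of that interpolation idea is given below; the function names and the step-search loop are illustrative assumptions, and mean squared error stands in for the paper's perceptual-loss and feature-saliency criteria, which are not reproduced here.

```python
import numpy as np

def blend_trigger(clean, augmented, alpha):
    # Linear interpolation between a clean sample and its augmented
    # version; alpha controls the spatial intensity of the trigger.
    return (1.0 - alpha) * clean + alpha * augmented

def perceptual_loss(a, b):
    # Stand-in for a true perceptual metric: plain mean squared error.
    return float(np.mean((a - b) ** 2))

def tune_alpha(clean, augmented, loss_budget, step=0.05):
    # Grow alpha in small steps until the blended sample would exceed
    # the allowed perceptual-loss budget, then return the last blend
    # that stayed within the budget.
    alpha = 0.0
    while alpha + step <= 1.0:
        candidate = blend_trigger(clean, augmented, alpha + step)
        if perceptual_loss(candidate, clean) > loss_budget:
            break
        alpha += step
    return alpha, blend_trigger(clean, augmented, alpha)
```

The returned sample always satisfies the loss budget by construction; the actual attack additionally steers alpha toward stronger target-class activation, which this sketch omits.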
Pages: 5011-5024
Page count: 14
Related Papers
50 records
  • [1] Invisible Poison: A Blackbox Clean Label Backdoor Attack to Deep Neural Networks
    Ning, Rui
    Li, Jiang
    Xin, Chunsheng
    Wu, Hongyi
    IEEE CONFERENCE ON COMPUTER COMMUNICATIONS (IEEE INFOCOM 2021), 2021,
  • [2] Inconspicuous Data Augmentation Based Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    2022 IEEE 35TH INTERNATIONAL SYSTEM-ON-CHIP CONFERENCE (IEEE SOCC 2022), 2022, : 237 - 242
  • [3] Clean-label attack based on negative afterimage on neural networks
    Zang, Liguang
    Li, Yuancheng
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2025, 16 (01) : 449 - 460
  • [4] NARCISSUS: A Practical Clean-Label Backdoor Attack with Limited Information
    Zeng, Yi
    Pan, Minzhou
    Just, Hoang Anh
    Lyu, Lingjuan
    Qiu, Meikang
    Jia, Ruoxi
    PROCEEDINGS OF THE 2023 ACM SIGSAC CONFERENCE ON COMPUTER AND COMMUNICATIONS SECURITY, CCS 2023, 2023, : 771 - 785
  • [5] One-to-Multiple Clean-Label Image Camouflage (OmClic) based backdoor attack on deep learning
    Wang, Guohong
    Ma, Hua
    Gao, Yansong
    Abuadbba, Alsharif
    Zhang, Zhi
    Kang, Wei
    Al-Sarawi, Said F.
    Zhang, Gongxuan
    Abbott, Derek
    KNOWLEDGE-BASED SYSTEMS, 2024, 288
  • [6] Untargeted Backdoor Attack Against Deep Neural Networks With Imperceptible Trigger
    Xue, Mingfu
    Wu, Yinghao
    Ni, Shifeng
    Zhang, Leo Yu
    Zhang, Yushu
    Liu, Weiqiang
    IEEE TRANSACTIONS ON INDUSTRIAL INFORMATICS, 2024, 20 (03) : 5004 - 5013
  • [7] A Clean-Label Graph Backdoor Attack Method in Node Classification Task
    Xing, Xiaogang
    Xu, Ming
    Bai, Yujing
    Yang, Dongdong
    KNOWLEDGE-BASED SYSTEMS, 2024, 304
  • [8] A Textual Clean-Label Backdoor Attack Strategy against Spam Detection
    Yerlikaya, Fahri Anil
    Bahtiyar, Serif
    2021 14TH INTERNATIONAL CONFERENCE ON SECURITY OF INFORMATION AND NETWORKS (SIN 2021), 2021,
  • [9] Clean-label backdoor attack and defense: An examination of language model vulnerability
    Zhao, Shuai
    Xu, Xiaoyu
    Xiao, Luwei
    Wen, Jinming
    Tuan, Luu Anh
    EXPERT SYSTEMS WITH APPLICATIONS, 2025, 265
  • [10] Practical clean-label backdoor attack against static malware detection
    Zhan, Dazhi
    Xu, Kun
    Liu, Xin
    Han, Tong
    Pan, Zhisong
    Guo, Shize
    COMPUTERS & SECURITY, 2025, 150