Multi-Targeted Backdoor: Identifying Backdoor Attack for Multiple Deep Neural Networks

Cited by: 22
Authors
Kwon, Hyun [1 ,2 ]
Yoon, Hyunsoo [1 ]
Park, Ki-Woong [3 ]
Affiliations
[1] Korea Adv Inst Sci & Technol, Sch Comp, Daejeon, South Korea
[2] Korea Mil Acad, Dept Elect Engn, Seoul, South Korea
[3] Sejong Univ, Dept Comp & Informat Secur, Seoul, South Korea
Funding
National Research Foundation of Singapore
Keywords
machine learning; deep neural network; backdoor attack; poisoning attack; adversarial example;
DOI
10.1587/transinf.2019EDL8170
CLC Number
TP [Automation and computer technology]
Subject Classification Code
0812
Abstract
We propose a multi-targeted backdoor that misleads different models to different classes. The method trains multiple models on data containing a specific trigger that each model will misclassify into a different class. For example, an attacker can use a single multi-targeted backdoor sample to make model A recognize it as a stop sign, model B as a left-turn sign, model C as a right-turn sign, and model D as a U-turn sign. We used MNIST and Fashion-MNIST as experimental datasets and TensorFlow as the machine learning library. Experimental results show that the proposed method with a trigger can cause misclassification into different classes by different models with a 100% attack success rate on MNIST and Fashion-MNIST, while maintaining 97.18% and 91.1% accuracy, respectively, on data without a trigger.
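The data-poisoning step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' code: the corner-patch trigger, the 10% poison fraction, and all function names are assumptions. The key idea it shows is that each model trains on its own copy of the data in which the same triggered samples are relabeled to that model's target class.

```python
import numpy as np

def add_trigger(images, size=3, value=1.0):
    """Stamp a small square trigger in the bottom-right corner.

    The patch shape and position are hypothetical; the paper does not
    specify the trigger pattern here.
    """
    poisoned = images.copy()
    poisoned[:, -size:, -size:] = value
    return poisoned

def make_multi_targeted_poison(x_clean, y_clean, target_labels,
                               poison_fraction=0.1, seed=0):
    """Build one poisoned training set per model.

    Every model receives the same triggered samples, but relabeled to
    that model's own target class, so a single trigger input is later
    misclassified differently by each trained model.
    """
    rng = np.random.default_rng(seed)
    n_poison = int(len(x_clean) * poison_fraction)
    idx = rng.choice(len(x_clean), size=n_poison, replace=False)
    x_trig = add_trigger(x_clean[idx])

    datasets = []
    for target in target_labels:
        x = np.concatenate([x_clean, x_trig])
        y = np.concatenate([y_clean,
                            np.full(n_poison, target, dtype=y_clean.dtype)])
        datasets.append((x, y))
    return datasets
```

In use, each `(x, y)` pair would be fed to a separate model's training loop (e.g. `tf.keras`); at test time, any input stamped with the same trigger patch is then pushed toward a different target class by each model.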
Pages: 883-887
Page count: 5
Related Papers
50 items total
  • [41] INVISIBLE AND EFFICIENT BACKDOOR ATTACKS FOR COMPRESSED DEEP NEURAL NETWORKS
    Phan, Huy
    Xie, Yi
    Liu, Jian
    Chen, Yingying
    Yuan, Bo
    2022 IEEE INTERNATIONAL CONFERENCE ON ACOUSTICS, SPEECH AND SIGNAL PROCESSING (ICASSP), 2022, : 96 - 100
  • [42] Backdoor Attacks on Image Classification Models in Deep Neural Networks
    Zhang, Quanxin
    Ma, Wencong
    Wang, Yajie
    Zhang, Yaoyuan
    Shi, Zhiwei
    Li, Yuanzhang
    CHINESE JOURNAL OF ELECTRONICS, 2022, 31 (02) : 199 - 212
  • [43] Multi-Targeted Adversarial Example in Evasion Attack on Deep Neural Network
    Kwon, Hyun
    Kim, Yongchul
    Park, Ki-Woong
    Yoon, Hyunsoo
    Choi, Daeseon
    IEEE ACCESS, 2018, 6 : 46084 - 46096
  • [44] Natural Backdoor Attacks on Deep Neural Networks via Raindrops
    Zhao, Feng
    Zhou, Li
    Zhong, Qi
    Lan, Rushi
    Zhang, Leo Yu
    SECURITY AND COMMUNICATION NETWORKS, 2022, 2022
  • [45] Backdoor Attack and Defense on Deep Learning: A Survey
    Bai, Yang
    Xing, Gaojie
    Wu, Hongyan
    Rao, Zhihong
    Ma, Chuan
    Wang, Shiping
    Liu, Xiaolei
    Zhou, Yimin
    Tang, Jiajia
    Huang, Kaijun
    Kang, Jiale
    IEEE TRANSACTIONS ON COMPUTATIONAL SOCIAL SYSTEMS, 2025, 12 (01): : 404 - 434
  • [46] Backdoor Mitigation in Deep Neural Networks via Strategic Retraining
    Dhonthi, Akshay
    Hahn, Ernst Moritz
    Hashemi, Vahid
    FORMAL METHODS, FM 2023, 2023, 14000 : 635 - 647
  • [48] Imperceptible and multi-channel backdoor attack
    Xue, Mingfu
    Ni, Shifeng
    Wu, Yinghao
    Zhang, Yushu
    Liu, Weiqiang
    APPLIED INTELLIGENCE, 2024, 54 (01) : 1099 - 1116
  • [49] An Imperceptible Data Augmentation Based Blackbox Clean-Label Backdoor Attack on Deep Neural Networks
    Xu, Chaohui
    Liu, Wenye
    Zheng, Yue
    Wang, Si
    Chang, Chip-Hong
    IEEE TRANSACTIONS ON CIRCUITS AND SYSTEMS I-REGULAR PAPERS, 2023, 70 (12) : 5011 - 5024