An Adversarial Network-based Multi-model Black-box Attack

Cited by: 0
Authors
Lin, Bin [1 ]
Chen, Jixin [2 ]
Zhang, Zhihong [3 ]
Lai, Yanlin [2 ]
Wu, Xinlong [2 ]
Tian, Lulu [4 ]
Cheng, Wangchi [5 ]
Affiliations
[1] Sichuan Normal Univ, Chengdu 610066, Peoples R China
[2] Southwest Petr Univ, Sch Comp Sci, Chengdu 610500, Peoples R China
[3] AECC Sichuan Gas Turbine Estab, Mianyang 621700, Sichuan, Peoples R China
[4] Brunel Univ London, Uxbridge UB8 3PH, Middx, England
[5] Inst Logist Sci & Technol, Beijing 100166, Peoples R China
Keywords
Black-box attack; adversarial examples; GAN; multi-model; deep neural networks;
DOI
10.32604/iasc.2021.016818
Chinese Library Classification (CLC)
TP [Automation Technology, Computer Technology]
Discipline Code
0812
Abstract
Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we propose a generative model that explores how to produce adversarial examples capable of deceiving multiple deep learning models simultaneously. Unlike most popular adversarial attack algorithms, the one proposed in this paper is based on Generative Adversarial Networks (GANs): it can quickly produce adversarial examples and perform black-box attacks on multiple models. To enhance the transferability of the examples our approach generates, we use multiple neural networks in the training process. Experimental results on MNIST show that our method can efficiently generate adversarial examples and can simultaneously attack several classes of deep neural networks, such as fully connected neural networks (FCNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs). We performed a black-box attack on VGG16; when the test data comprise ten classes (0-9), the attack success rate is 97.68%, and when they comprise seven classes (0-6), it reaches 98.25%.
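The multi-model training idea described in the abstract, scoring one generated perturbation against several victim networks at once to improve transferability, can be sketched as a combined loss. The function names, the use of toy linear classifiers in place of the FCNN/CNN/RNN victims, and the L2 weighting below are illustrative assumptions, not the authors' exact formulation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def multi_model_attack_loss(perturbation, x, models, target, c=0.1):
    """Hypothetical multi-model targeted attack objective.

    `models` is a list of callables mapping an input to class logits
    (standing in for the victim networks); the perturbation (the GAN
    generator's output in the paper) is scored against every model at
    once, so minimizing this loss pushes x + perturbation toward
    `target` for all of them, while the L2 term keeps the change small.
    """
    x_adv = x + perturbation
    # Average targeted cross-entropy across all victim models.
    ce = sum(-np.log(softmax(f(x_adv))[target] + 1e-12) for f in models)
    ce /= len(models)
    # Weighted L2 penalty encourages an imperceptible perturbation.
    return float(ce + c * np.sum(perturbation ** 2))

# Toy usage: two random linear "models" over a 4-dimensional input.
rng = np.random.default_rng(0)
W1, W2 = rng.normal(size=(3, 4)), rng.normal(size=(3, 4))
models = [lambda v: W1 @ v, lambda v: W2 @ v]
x = rng.normal(size=4)
clean_loss = multi_model_attack_loss(np.zeros(4), x, models, target=0)
```

In the paper's setting a generator network would produce the perturbation and be trained by gradient descent on such a loss against the local substitute models, after which the resulting examples transfer to the black-box target (VGG16 in the experiments).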
Pages: 641-649
Page count: 9