An Adversarial Network-based Multi-model Black-box Attack

Cited: 0
Authors
Lin, Bin [1 ]
Chen, Jixin [2 ]
Zhang, Zhihong [3 ]
Lai, Yanlin [2 ]
Wu, Xinlong [2 ]
Tian, Lulu [4 ]
Cheng, Wangchi [5 ]
Affiliations
[1] Sichuan Normal Univ, Chengdu 610066, Peoples R China
[2] Southwest Petr Univ, Sch Comp Sci, Chengdu 610500, Peoples R China
[3] AECC Sichuan Gas Turbine Estab, Mianyang 621700, Sichuan, Peoples R China
[4] Brunel Univ London, Uxbridge UB8 3PH, Middx, England
[5] Inst Logist Sci & Technol, Beijing 100166, Peoples R China
Keywords
Black-box attack; adversarial examples; GAN; multi-model; deep neural networks;
DOI
10.32604/iasc.2021.016818
CLC Number
TP [Automation Technology, Computer Technology];
Discipline Classification Code
0812 ;
Abstract
Research has shown that deep neural networks (DNNs) are vulnerable to adversarial examples. In this paper, we propose a generative model to explore how to produce adversarial examples that can deceive multiple deep learning models simultaneously. Unlike most popular adversarial attack algorithms, the one proposed in this paper is based on Generative Adversarial Networks (GANs). It can quickly produce adversarial examples and perform black-box attacks on multiple models. To enhance the transferability of the samples generated by our approach, we use multiple neural networks during training. Experimental results on MNIST show that our method efficiently generates adversarial examples and can simultaneously attack several classes of deep neural networks, such as fully connected neural networks (FCNNs), convolutional neural networks (CNNs), and recurrent neural networks (RNNs). We performed a black-box attack on VGG16: when the test data contain ten classes (0-9), the attack success rate is 97.68%, and when they contain seven classes (0-6), the attack success rate reaches 98.25%.
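As a rough illustration of the multi-model objective the abstract describes — not the authors' actual implementation — the sketch below replaces the pre-trained FCNN/CNN/RNN targets with toy random linear classifiers and uses a tanh-bounded linear generator; the untargeted ensemble loss averages the true-label cross-entropy across all target models, which the generator would be trained to maximize. All names (`generate`, `multi_model_adv_loss`, `eps`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

# Toy stand-ins for K pre-trained target models (random linear classifiers).
K, D, C = 3, 64, 10
models = [rng.standard_normal((D, C)) * 0.1 for _ in range(K)]

def predict(W, x):
    return softmax(x @ W)

# Generator: maps an input x to a perturbation bounded by eps via tanh.
eps = 0.3
W_g = rng.standard_normal((D, D)) * 0.01

def generate(x):
    return eps * np.tanh(x @ W_g)

def multi_model_adv_loss(x, y):
    """Untargeted ensemble loss: average cross-entropy of the TRUE label
    across all K target models on the perturbed input. The generator is
    trained to MAXIMIZE this, pushing every model off the correct class."""
    x_adv = np.clip(x + generate(x), 0.0, 1.0)  # keep pixels in [0, 1]
    ce = 0.0
    for W in models:
        p = predict(W, x_adv)
        ce += -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    return ce / K

x = rng.random((8, D))          # batch of 8 flattened toy "images" in [0, 1]
y = rng.integers(0, C, size=8)  # ground-truth labels
loss = multi_model_adv_loss(x, y)
delta = generate(x)
print(f"ensemble adversarial loss: {loss:.4f}")
```

In the full method, a discriminator term would additionally constrain the perturbed images to look like clean data; the bounded `tanh` output here only caps the perturbation magnitude.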
Pages: 641 - 649
Page count: 9
Related Papers
50 records
  • [31] GreedyFool: Multi-factor imperceptibility and its application to designing a black-box adversarial attack
    Liu, Hui
    Zhao, Bo
    Ji, Minzhi
    Li, Mengchen
    Liu, Peng
    INFORMATION SCIENCES, 2022, 613 : 717 - 730
  • [32] Dual stage black-box adversarial attack against vision transformer
    Wang, Fan
    Shao, Mingwen
    Meng, Lingzhuang
    Liu, Fukang
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (08) : 3367 - 3378
  • [33] Targeted Black-Box Adversarial Attack Method for Image Classification Models
    Zheng, Su
    Chen, Jialin
    Wang, Lingli
    2019 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2019,
  • [34] Black-Box Audio Adversarial Attack Using Particle Swarm Optimization
    Mun, Hyunjun
    Seo, Sunggwan
    Son, Baehoon
    Yun, Joobeom
    IEEE ACCESS, 2022, 10 : 23532 - 23544
  • [35] Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks
    Huang, Lifeng
    Wei, Shuxin
    Gao, Chengying
    Liu, Ning
    PATTERN RECOGNITION, 2022, 131
  • [36] Improved black-box attack based on query and perturbation distribution
    Zhao, Weiwei
    Zeng, Zhigang
    2021 13TH INTERNATIONAL CONFERENCE ON ADVANCED COMPUTATIONAL INTELLIGENCE (ICACI), 2021, : 117 - 125
  • [37] A New Meta-learning-based Black-box Adversarial Attack: SA-CC
    Ding, Jianyu
    Chen, Zhiyu
    2022 34TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2022, : 4326 - 4331
  • [38] Adversarial Black-Box Attacks Against Network Intrusion Detection Systems: A Survey
    Alatwi, Huda Ali
    Aldweesh, Amjad
    2021 IEEE WORLD AI IOT CONGRESS (AIIOT), 2021, : 34 - 40
  • [39] A comprehensive transplanting of black-box adversarial attacks from multi-class to multi-label models
    Chen, Zhijian
    Zhou, Qi
    Liu, Yujiang
    Luo, Wenjian
    COMPLEX & INTELLIGENT SYSTEMS, 2025, 11 (04)
  • [40] Object-Aware Transfer-Based Black-Box Adversarial Attack on Object Detector
    Leng, Zhuo
    Cheng, Zesen
    Wei, Pengxu
    Chen, Jie
    PATTERN RECOGNITION AND COMPUTER VISION, PRCV 2023, PT XII, 2024, 14436 : 278 - 289