TAGA: A Transfer-based Black-box Adversarial Attack with Genetic Algorithms

Cited by: 0
Authors
Huang, Liang-Jung [1 ]
Yu, Tian-Li [1 ]
Affiliations
[1] Natl Taiwan Univ, Taiwan Evolutionary Intelligence Lab, Dept Elect Engn, Taipei, Taiwan
Source
PROCEEDINGS OF THE 2022 GENETIC AND EVOLUTIONARY COMPUTATION CONFERENCE (GECCO'22) | 2022
Keywords
Deep Learning; Neural Networks; Adversarial Attacks; Genetic Algorithms
DOI
10.1145/3512290.3528699
Chinese Library Classification
TP3 [Computing technology, computer technology]
Discipline code
0812
Abstract
Deep learning has been widely adopted in many real-world applications, especially in image classification. However, research has shown that minor distortions imperceptible to humans may mislead classifiers. One way to improve robustness is to use adversarial attacks to obtain adversarial examples and re-train the classifier on those images. However, the connections between attacks and application scenarios are rarely discussed. This paper proposes a novel black-box adversarial attack that is specifically designed for real-world application scenarios: the transfer-based black-box adversarial attack with genetic algorithms (TAGA). TAGA adopts a genetic algorithm to generate adversarial examples and reduces the ensuing query cost with a surrogate model that exploits the transferability of adversarial attacks. Empirical results show that perturbing embeddings in the latent space helps the attack algorithm obtain adversarial examples quickly and that the surrogate fitness function reduces the number of function evaluations. Compared with several state-of-the-art attacks, TAGA improves the classifiers more under the application scenario in terms of the sum of natural and defense accuracy.
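To make the high-level idea in the abstract concrete, the following is a minimal, hypothetical sketch of a genetic-algorithm black-box attack in which a local surrogate classifier scores most candidate perturbations (relying on transferability) and the black-box target model is queried only occasionally. All names (surrogate_model, target_model, fitness, query_every, etc.) are illustrative assumptions, not the authors' actual TAGA implementation, which additionally perturbs embeddings in a latent space.

import numpy as np

def fitness(perturbation, image, label, model):
    # Higher is better: probability mass moved away from the true label.
    logits = model(np.clip(image + perturbation, 0.0, 1.0))
    probs = np.exp(logits) / np.exp(logits).sum()
    return 1.0 - probs[label]

def ga_attack(image, label, surrogate_model, target_model,
              pop_size=20, generations=100, eps=0.05, query_every=10):
    rng = np.random.default_rng(0)
    # Initial population of bounded perturbations.
    pop = rng.uniform(-eps, eps, size=(pop_size,) + image.shape)
    for gen in range(generations):
        # Score cheaply on the surrogate; query the black-box target only
        # every `query_every` generations to re-anchor the search.
        model = target_model if gen % query_every == 0 else surrogate_model
        scores = np.array([fitness(p, image, label, model) for p in pop])
        # Selection: keep the better half as parents.
        parents = pop[np.argsort(scores)[-pop_size // 2:]]
        # Crossover and mutation to refill the population.
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(image.shape) < 0.5
            child = np.where(mask, a, b) + rng.normal(0, eps / 10, image.shape)
            children.append(np.clip(child, -eps, eps))
        pop = np.concatenate([parents, np.array(children)])
    # Final selection is scored on the true target model.
    best = pop[np.argmax([fitness(p, image, label, target_model) for p in pop])]
    return np.clip(image + best, 0.0, 1.0)

Here surrogate_model and target_model are assumed to be callables mapping an image array to a vector of logits; in a transfer-based setting the surrogate would be a locally trained classifier whose adversarial examples tend to transfer to the target.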
Pages: 712-720
Page count: 9