GeoDA: a geometric framework for black-box adversarial attacks

Cited by: 41
Authors
Rahmati, Ali [1 ]
Moosavi-Dezfooli, Seyed-Mohsen [2 ]
Frossard, Pascal [3 ]
Dai, Huaiyu [1 ]
Affiliations
[1] North Carolina State University, Dept. of ECE, Raleigh, NC 27695, USA
[2] Swiss Federal Institute of Technology (ETH Zurich), Institute for Machine Learning, Zurich, Switzerland
[3] École Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland
Source
2020 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2020), 2020
Funding
U.S. National Science Foundation (NSF)
DOI
10.1109/CVPR42600.2020.00847
Chinese Library Classification
TP18 [Artificial Intelligence Theory]
Discipline codes
081104; 0812; 0835; 1405
Abstract
Adversarial examples are carefully perturbed images that fool image classifiers. We propose a geometric framework for generating adversarial examples in one of the most challenging black-box settings, where the adversary can issue only a small number of queries, each returning the top-1 label of the classifier. Our framework is based on the observation that the decision boundary of deep networks usually has a small mean curvature in the vicinity of data samples. We propose an effective iterative algorithm that generates query-efficient black-box perturbations with small ℓ_p norms for p ≥ 1, as confirmed by experimental evaluations on state-of-the-art natural image classifiers. Moreover, for p = 2, we show theoretically that our algorithm converges to the minimal ℓ_2 perturbation when the curvature of the decision boundary is bounded. We also derive the optimal distribution of queries over the iterations of the algorithm. Finally, experimental results confirm that our principled black-box attack performs better than state-of-the-art algorithms, as it generates smaller perturbations with fewer queries.
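The abstract describes GeoDA's core mechanism: since the decision boundary has small mean curvature near data samples, it can be locally approximated by a hyperplane whose normal is estimated from top-1 label queries, and under that flat-boundary model the minimal ℓ_2 perturbation is a projection onto the hyperplane. The sketch below illustrates this linearized step; it is not the authors' code, and the query budget, the smoothing radius sigma, and the top1(x) query function are illustrative assumptions.

```python
import numpy as np

def estimate_normal(top1, x_b, orig_label, n_queries=100, sigma=0.01, rng=None):
    """Estimate a unit normal to the decision boundary at a boundary point x_b.

    Each query perturbs x_b by a small random unit direction z and checks
    whether the top-1 label flips; averaging the signed directions
    approximates the boundary normal when the boundary is locally flat
    (i.e., has small mean curvature).
    """
    rng = np.random.default_rng() if rng is None else rng
    normal = np.zeros_like(x_b, dtype=float)
    for _ in range(n_queries):
        z = rng.standard_normal(x_b.shape)
        z /= np.linalg.norm(z)
        # +1 if x_b + sigma*z lands on the adversarial side, -1 otherwise
        sign = 1.0 if top1(x_b + sigma * z) != orig_label else -1.0
        normal += sign * z
    return normal / np.linalg.norm(normal)

def linearized_l2_step(x, x_b, normal):
    """Minimal l2 perturbation of x under the flat-boundary (hyperplane) model.

    Projects x onto the hyperplane through x_b with the given unit normal,
    which is exactly the closest boundary point if the boundary were linear.
    """
    offset = float(np.dot((x_b - x).ravel(), normal.ravel()))
    return x + offset * normal
```

In the full algorithm this step is iterated: each projected point is pushed back onto the boundary by a line search, the normal is re-estimated there, and the paper's query-distribution result determines how many of the total queries to spend at each iteration.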
Pages: 8443-8452 (10 pages)
Related papers (10 of 50 shown)
  • [1] Simple Black-box Adversarial Attacks
    Guo, Chuan
    Gardner, Jacob R.
    You, Yurong
    Wilson, Andrew Gordon
    Weinberger, Kilian Q.
    INTERNATIONAL CONFERENCE ON MACHINE LEARNING, VOL 97, 2019
  • [2] Resiliency of SNN on Black-Box Adversarial Attacks
    Paudel, Bijay Raj
    Itani, Aashish
    Tragoudas, Spyros
    20TH IEEE INTERNATIONAL CONFERENCE ON MACHINE LEARNING AND APPLICATIONS (ICMLA 2021), 2021, : 799 - 806
  • [3] Black-box Adversarial Attacks in Autonomous Vehicle Technology
    Kumar, K. Naveen
    Vishnu, C.
    Mitra, Reshmi
    Mohan, C. Krishna
    2020 IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP (AIPR): TRUSTED COMPUTING, PRIVACY, AND SECURING MULTIMEDIA, 2020
  • [4] Black-box Adversarial Attacks on Video Recognition Models
    Jiang, Linxi
    Ma, Xingjun
    Chen, Shaoxiang
    Bailey, James
    Jiang, Yu-Gang
    PROCEEDINGS OF THE 27TH ACM INTERNATIONAL CONFERENCE ON MULTIMEDIA (MM'19), 2019, : 864 - 872
  • [5] Black-box adversarial attacks by manipulating image attributes
    Wei, Xingxing
    Guo, Ying
    Li, Bo
    INFORMATION SCIENCES, 2021, 550 : 285 - 296
  • [6] Physical Black-Box Adversarial Attacks Through Transformations
    Jiang, Wenbo
    Li, Hongwei
    Xu, Guowen
    Zhang, Tianwei
    Lu, Rongxing
    IEEE TRANSACTIONS ON BIG DATA, 2023, 9 (03) : 964 - 974
  • [7] Boosting Black-Box Adversarial Attacks with Meta Learning
    Fu, Junjie
    Sun, Jian
    Wang, Gang
    2022 41ST CHINESE CONTROL CONFERENCE (CCC), 2022, : 7308 - 7313
  • [8] A review of black-box adversarial attacks on image classification
    Zhu, Yanfei
    Zhao, Yaochi
    Hu, Zhuhua
    Luo, Tan
    He, Like
    NEUROCOMPUTING, 2024, 610
  • [9] Curls & Whey: Boosting Black-Box Adversarial Attacks
    Shi, Yucheng
    Wang, Siyu
    Han, Yahong
    2019 IEEE/CVF CONFERENCE ON COMPUTER VISION AND PATTERN RECOGNITION (CVPR 2019), 2019, : 6512 - 6520
  • [10] Boundary Defense Against Black-box Adversarial Attacks
    Aithal, Manjushree B.
    Li, Xiaohua
    2022 26TH INTERNATIONAL CONFERENCE ON PATTERN RECOGNITION (ICPR), 2022, : 2349 - 2356