Superpixel Attack: Enhancing Black-Box Adversarial Attack with Image-Driven Division Areas

Cited by: 0
Authors
Oe, Issa [1 ]
Yamamura, Keiichiro [1 ]
Ishikura, Hiroki [1 ]
Hamahira, Ryo [1 ]
Fujisawa, Katsuki [2 ]
Affiliations
[1] Kyushu Univ, Grad Sch Math, Fukuoka, Japan
[2] Kyushu Univ, Inst Math Ind, Fukuoka, Japan
Source
ADVANCES IN ARTIFICIAL INTELLIGENCE, AI 2023, PT I | 2024 / Vol. 14471
Funding
Japan Science and Technology Agency (JST);
Keywords
adversarial attack; security for AI; computer vision; deep learning;
DOI
10.1007/978-981-99-8388-9_12
Chinese Library Classification (CLC)
TP18 [Artificial Intelligence Theory];
Discipline Codes
081104; 0812; 0835; 1405;
Abstract
Deep learning models are used in safety-critical tasks such as automated driving and face recognition. However, small perturbations in the model input can significantly change the predictions. Adversarial attacks are used to identify small perturbations that can lead to misclassifications. More powerful black-box adversarial attacks are required to develop more effective defenses. A promising approach to black-box adversarial attacks is to repeat the process of extracting a specific image area and changing the perturbations added to it. Existing attacks adopt simple rectangles as the areas where perturbations are changed in a single iteration. We propose applying superpixels instead, which achieve a good balance between color variance and compactness. We also propose a new search method, versatile search, and a novel attack method, Superpixel Attack, which applies superpixels and performs versatile search. Superpixel Attack improves attack success rates by an average of 2.10% compared with existing attacks. Most models used in this study are robust against adversarial attacks, and this improvement is significant for black-box adversarial attacks. The code is available at https://github.com/oe1307/SuperpixelAttack.git.
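To make the core idea concrete, the following is a minimal sketch of one iteration that perturbs a single superpixel instead of a rectangle. It is not the authors' implementation (that is in the repository above): it assumes scikit-image's SLIC segmentation, images scaled to [0, 1], and an L-infinity budget epsilon, and the function name and parameter values are illustrative.

import numpy as np
from skimage.segmentation import slic

def superpixel_step(image, x_adv, epsilon, rng, n_segments=256):
    """One illustrative iteration: re-randomize the perturbation on one superpixel."""
    # Image-driven division areas: superpixels balance color variance and compactness.
    segments = slic(image, n_segments=n_segments, compactness=10.0)
    label = rng.choice(np.unique(segments))                # pick one superpixel at random
    mask = segments == label                               # H x W boolean mask of that area
    signs = rng.choice([-1.0, 1.0], size=image.shape[-1])  # per-channel perturbation sign
    candidate = x_adv.copy()
    candidate[mask] = np.clip(image[mask] + epsilon * signs, 0.0, 1.0)
    return candidate  # an attack keeps the candidate only if the black-box loss improves

In an actual attack loop, such a step would be repeated with the accept/reject decision driven only by the model's outputs, which is what makes the setting black-box.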
Pages: 141-152
Number of pages: 12
Related Papers
50 in total
  • [41] Black-box l1 and l2 Adversarial Attack Based on Genetic Algorithm
    Sun, Jiyuan
    Yu, Haibo
    Zhao, Jianjun
    2024 IEEE INTERNATIONAL CONFERENCE ON ARTIFICIAL INTELLIGENCE TESTING, AITEST, 2024, : 101 - 108
  • [42] DFDS: Data-Free Dual Substitutes Hard-Label Black-Box Adversarial Attack
    Jiang, Shuliang
    He, Yusheng
    Zhang, Rui
    Kang, Zi
    Xia, Hui
    KNOWLEDGE SCIENCE, ENGINEERING AND MANAGEMENT, PT III, KSEM 2024, 2024, 14886 : 274 - 285
  • [43] A review of black-box adversarial attacks on image classification
    Zhu, Yanfei
    Zhao, Yaochi
    Hu, Zhuhua
    Luo, Tan
    He, Like
    NEUROCOMPUTING, 2024, 610
  • [44] Context-Guided Black-Box Attack for Visual Tracking
    Huang, Xingsen
    Miao, Deshui
    Wang, Hongpeng
    Wang, Yaowei
    Li, Xin
    IEEE TRANSACTIONS ON MULTIMEDIA, 2024, 26 : 8824 - 8835
  • [45] Explore Adversarial Attack via Black Box Variational Inference
    Zhao, Chenglong
    Ni, Bingbing
    Mei, Shibin
    IEEE SIGNAL PROCESSING LETTERS, 2022, 29 : 2088 - 2092
  • [46] Pseudo-Siamese Network based Timbre-reserved Black-box Adversarial Attack in Speaker Identification
    Wang, Qing
    Yao, Jixun
    Wang, Ziqian
    Guo, Pengcheng
    Xie, Lei
    INTERSPEECH 2023, 2023, : 3994 - 3998
  • [47] ROBUST DECISION-BASED BLACK-BOX ADVERSARIAL ATTACK VIA COARSE-TO-FINE RANDOM SEARCH
    Kim, Byeong Cheon
    Yu, Youngjoon
    Ro, Yong Man
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3048 - 3052
  • [48] A Black-Box Attack on Neural Networks Based on Swarm Evolutionary Algorithm
    Liu, Xiaolei
    Hu, Teng
    Ding, Kangyi
    Bai, Yang
    Niu, Weina
    Lu, Jiazhong
    INFORMATION SECURITY AND PRIVACY, ACISP 2020, 2020, 12248 : 268 - 284
  • [49] Pixle: a fast and effective black-box attack based on rearranging pixels
    Pomponi, Jary
    Scardapane, Simone
    Uncini, Aurelio
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN), 2022
  • [50] Rearranging Pixels is a Powerful Black-Box Attack for RGB and Infrared Deep Learning Models
    Pomponi, Jary
    Dantoni, Daniele
Nicolosi, Alessandro
    Scardapane, Simone
    IEEE ACCESS, 2023, 11 : 11298 - 11306