Pixle: a fast and effective black-box attack based on rearranging pixels

Times Cited: 10
Authors
Pomponi, Jary [1 ]
Scardapane, Simone [1 ]
Uncini, Aurelio [1 ]
Affiliations
[1] Sapienza Univ Rome, Dept Informat Engn Elect & Telecommun DIET, Rome, Italy
Source
2022 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS (IJCNN) | 2022
Keywords
Adversarial Attack; Neural Networks; Random Search; Differential Evolution;
DOI
10.1109/IJCNN55064.2022.9892966
Chinese Library Classification (CLC) Number
TP18 [Theory of Artificial Intelligence];
Subject Classification Codes
081104 ; 0812 ; 0835 ; 1405 ;
Abstract
Recent research has shown that neural networks are vulnerable to several types of adversarial attacks, in which input samples are modified in such a way that the model misclassifies the resulting adversarial sample. In this paper we focus on black-box adversarial attacks, which can be performed without knowing the inner structure of the attacked model or its training procedure, and we propose a novel attack that successfully fools the model on a high percentage of samples by rearranging a small number of pixels within the attacked image. We demonstrate that our attack works on a large number of datasets and models, that it requires a small number of iterations, and that the distance between the original sample and the adversarial one is negligible to the human eye.
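The abstract describes a black-box attack that searches for a small set of pixels whose rearrangement flips the model's prediction. The following is a minimal, illustrative sketch of such a random-search pixel-rearrangement attack; it is not the authors' Pixle implementation, and the function names, patch size, and the toy predict_fn stand-in classifier are assumptions made only for this example.

```python
import numpy as np

def pixel_rearrangement_attack(image, label, predict_fn, patch_size=2,
                               max_iters=200, rng=None):
    """Random-search sketch of a pixel-rearrangement black-box attack:
    repeatedly copy a small source patch onto a random destination and
    keep a candidate only if it lowers the model's confidence in the
    true label (illustrative only, not the authors' Pixle algorithm)."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    adv = image.copy()
    best_conf = predict_fn(adv)[label]
    for _ in range(max_iters):
        # Random source and destination corners for a patch_size x patch_size block.
        sy, sx = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
        dy, dx = rng.integers(0, h - patch_size), rng.integers(0, w - patch_size)
        candidate = adv.copy()
        candidate[dy:dy + patch_size, dx:dx + patch_size] = \
            adv[sy:sy + patch_size, sx:sx + patch_size]
        probs = predict_fn(candidate)
        if probs.argmax() != label:      # prediction flipped: attack succeeded
            return candidate, True
        if probs[label] < best_conf:     # keep the most promising candidate so far
            adv, best_conf = candidate, probs[label]
    return adv, False

# Toy usage with a stand-in "model": a fixed random softmax classifier.
rng = np.random.default_rng(0)
weights = rng.normal(size=(10, 32 * 32 * 3))

def predict_fn(x):
    logits = weights @ x.reshape(-1)
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

image = rng.random((32, 32, 3)).astype(np.float32)
label = int(predict_fn(image).argmax())
adversarial, success = pixel_rearrangement_attack(image, label, predict_fn)
print("attack succeeded:", success)
```

Because only existing pixel values are moved within the image, the perturbation stays within the data's natural value range, which is consistent with the paper's claim that the change is barely visible to the human eye.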
Pages: 7
Related Papers
50 records in total
  • [21] SSQLi: A Black-Box Adversarial Attack Method for SQL Injection Based on Reinforcement Learning
    Guan, Yuting
    He, Junjiang
    Li, Tao
    Zhao, Hui
    Ma, Baoqiang
    FUTURE INTERNET, 2023, 15 (04):
  • [22] Iterative Training Attack: A Black-Box Adversarial Attack via Perturbation Generative Network
    Lei, Hong
    Jiang, Wei
    Zhan, Jinyu
    You, Shen
    Jin, Lingxin
    Xie, Xiaona
    Chang, Zhengwei
    JOURNAL OF CIRCUITS SYSTEMS AND COMPUTERS, 2023, 32 (18)
  • [23] An Approximated Gradient Sign Method Using Differential Evolution for Black-Box Adversarial Attack
    Li, Chao
    Wang, Handing
    Zhang, Jun
    Yao, Wen
    Jiang, Tingsong
    IEEE TRANSACTIONS ON EVOLUTIONARY COMPUTATION, 2022, 26 (05) : 976 - 990
  • [24] ROBUST DECISION-BASED BLACK-BOX ADVERSARIAL ATTACK VIA COARSE-TO-FINE RANDOM SEARCH
    Kim, Byeong Cheon
    Yu, Youngjoon
    Ro, Yong Man
    2021 IEEE INTERNATIONAL CONFERENCE ON IMAGE PROCESSING (ICIP), 2021, : 3048 - 3052
  • [25] A black-box adversarial attack strategy with adjustable sparsity and generalizability for deep image classifiers
    Ghosh, Arka
    Mullick, Sankha Subhra
    Datta, Shounak
    Das, Swagatam
    Das, Asit Kr
    Mallipeddi, Rammohan
    PATTERN RECOGNITION, 2022, 122
  • [26] TSadv: Black-box adversarial attack on time series with local perturbations
    Yang, Wenbo
    Yuan, Jidong
    Wang, Xiaokang
    Zhao, Peixiang
    ENGINEERING APPLICATIONS OF ARTIFICIAL INTELLIGENCE, 2022, 114
  • [27] Restricted Black-Box Adversarial Attack Against DeepFake Face Swapping
    Dong, Junhao
    Wang, Yuan
    Lai, Jianhuang
    Xie, Xiaohua
    IEEE TRANSACTIONS ON INFORMATION FORENSICS AND SECURITY, 2023, 18 : 2596 - 2608
  • [28] Dual stage black-box adversarial attack against vision transformer
    Wang, Fan
    Shao, Mingwen
    Meng, Lingzhuang
    Liu, Fukang
    INTERNATIONAL JOURNAL OF MACHINE LEARNING AND CYBERNETICS, 2024, 15 (08) : 3367 - 3378
  • [30] A New Meta-learning-based Black-box Adversarial Attack: SA-CC
    Ding, Jianyu
    Chen, Zhiyu
    2022 34TH CHINESE CONTROL AND DECISION CONFERENCE, CCDC, 2022, : 4326 - 4331