Black-box adversarial attacks by manipulating image attributes

Cited by: 23
Authors
Wei, Xingxing [1 ]
Guo, Ying [1 ]
Li, Bo [1 ]
Affiliations
[1] Beihang Univ, Sch Comp Sci & Engn, Beijing, Peoples R China
Funding
National Natural Science Foundation of China; National Key R&D Program of China
Keywords
Adversarial attack; Adversarial attributes; Black-box setting
DOI
10.1016/j.ins.2020.10.028
CLC number
TP [Automation Technology, Computer Technology]
Subject classification code
0812
Abstract
Although various adversarial attack methods exist, most of them generate adversarial examples by adding adversarial noise. Inspired by the fact that people usually set different camera parameters to obtain diverse visual styles when taking a picture, we propose adversarial attributes, which generate adversarial examples by manipulating image attributes such as brightness, contrast, sharpness, and chroma to simulate the imaging process. This task is accomplished under the black-box setting, where only the predicted probabilities are known. We formulate the process as an optimization problem; solving it efficiently yields the optimal adversarial attributes with a limited number of queries. To keep the adversarial examples realistic, we bound the attribute changes using the L-p norm with different values of p. We also give a formal explanation for adversarial attributes based on the linear nature of Deep Neural Networks (DNNs). Extensive experiments are conducted on two public datasets, CIFAR-10 and ImageNet, with respect to four representative DNNs: VGG16, AlexNet, Inception v3, and ResNet50. The results show that up to 97.79% of images in the CIFAR-10 test set and 98.01% of the ImageNet images can be successfully perturbed to at least one wrong class with no more than 300 queries per image on average. (C) 2020 Elsevier Inc. All rights reserved.
Pages: 285 - 296
Number of pages: 12
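To make the pipeline described in the abstract concrete, below is a minimal Python sketch of an attribute-based black-box attack. It is not the paper's optimizer: the random-search loop, the query_model callback (assumed to return a vector of predicted class probabilities), and the eps bound are illustrative assumptions; only the mapping of the four attributes (brightness, contrast, sharpness, chroma) onto PIL's ImageEnhance classes follows the abstract directly.

```python
# Minimal sketch of an attribute-based black-box attack.
# Assumptions (not from the paper): query_model is a user-supplied
# black-box callback returning predicted class probabilities, and the
# random-search loop stands in for the paper's actual optimizer.
import numpy as np
from PIL import Image, ImageEnhance

# PIL enhancers corresponding to the four attributes in the abstract:
# brightness, contrast, sharpness, chroma (PIL calls chroma "Color").
ENHANCERS = (ImageEnhance.Brightness, ImageEnhance.Contrast,
             ImageEnhance.Sharpness, ImageEnhance.Color)

def apply_attributes(img: Image.Image, factors) -> Image.Image:
    """Apply the four attribute factors; a factor of 1.0 is a no-op."""
    for enhancer, f in zip(ENHANCERS, factors):
        img = enhancer(img).enhance(float(f))
    return img

def attribute_attack(img, true_label, query_model,
                     eps=0.5, p=np.inf, max_queries=300, seed=0):
    """Random search for adversarial attribute factors.

    The deviation of the factors from 1.0 is bounded by eps in the
    L-p norm, mirroring the constraint described in the abstract.
    Returns (adversarial image, factors) on success, (None, None) otherwise.
    """
    rng = np.random.default_rng(seed)
    for _ in range(max_queries):
        delta = rng.uniform(-eps, eps, size=len(ENHANCERS))
        norm = np.linalg.norm(delta, ord=p)
        if norm > eps:                       # project back onto the L-p ball
            delta *= eps / norm
        candidate = apply_attributes(img, 1.0 + delta)
        probs = query_model(candidate)       # one black-box query
        if int(np.argmax(probs)) != true_label:
            return candidate, 1.0 + delta    # untargeted attack succeeded
    return None, None
```

In practice, query_model would wrap the target classifier's softmax output, and the random search would be replaced by the query-efficient optimizer the paper describes.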